00:00:00.001 Started by upstream project "autotest-per-patch" build number 132535 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.064 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.065 The recommended git tool is: git 00:00:00.065 using credential 00000000-0000-0000-0000-000000000002 00:00:00.067 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.103 Fetching changes from the remote Git repository 00:00:00.105 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.163 Using shallow fetch with depth 1 00:00:00.163 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.163 > git --version # timeout=10 00:00:00.216 > git --version # 'git version 2.39.2' 00:00:00.216 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.262 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.262 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.954 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.965 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.978 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.978 > git config core.sparsecheckout # timeout=10 00:00:05.990 > git read-tree -mu HEAD # timeout=10 00:00:06.006 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.035 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.036 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.152 [Pipeline] Start of Pipeline 00:00:06.164 [Pipeline] library 00:00:06.166 Loading library shm_lib@master 00:00:06.166 Library shm_lib@master is cached. Copying from home. 00:00:06.183 [Pipeline] node 00:00:06.192 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.194 [Pipeline] { 00:00:06.203 [Pipeline] catchError 00:00:06.205 [Pipeline] { 00:00:06.214 [Pipeline] wrap 00:00:06.221 [Pipeline] { 00:00:06.230 [Pipeline] stage 00:00:06.232 [Pipeline] { (Prologue) 00:00:06.469 [Pipeline] sh 00:00:06.754 + logger -p user.info -t JENKINS-CI 00:00:06.771 [Pipeline] echo 00:00:06.772 Node: WFP6 00:00:06.781 [Pipeline] sh 00:00:07.082 [Pipeline] setCustomBuildProperty 00:00:07.095 [Pipeline] echo 00:00:07.097 Cleanup processes 00:00:07.104 [Pipeline] sh 00:00:07.391 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.391 3465632 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.401 [Pipeline] sh 00:00:07.680 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.680 ++ grep -v 'sudo pgrep' 00:00:07.680 ++ awk '{print $1}' 00:00:07.680 + sudo kill -9 00:00:07.680 + true 00:00:07.693 [Pipeline] cleanWs 00:00:07.702 [WS-CLEANUP] Deleting project workspace... 00:00:07.702 [WS-CLEANUP] Deferred wipeout is used... 
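The Prologue above is housekeeping: log the node name, then clear out any process still running from a previous run of this workspace before the deferred wipeout. pgrep lists candidates, the pgrep invocation itself is filtered back out, awk keeps only the PIDs, and the trailing "+ true" absorbs the error from kill -9 being invoked with an empty PID list, which is exactly what happens in this run. A minimal standalone sketch of that idiom, with the workspace path taken from the log (not the exact Jenkins script), would be:

# Hedged sketch of the cleanup step shown above; path copied from the log.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
# List processes started out of the workspace, drop the pgrep line itself, keep only the PIDs.
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
# Kill whatever survived; tolerate an empty list so this stage never fails the build.
sudo kill -9 $pids || true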
00:00:07.708 [WS-CLEANUP] done 00:00:07.713 [Pipeline] setCustomBuildProperty 00:00:07.725 [Pipeline] sh 00:00:08.003 + sudo git config --global --replace-all safe.directory '*' 00:00:08.084 [Pipeline] httpRequest 00:00:08.655 [Pipeline] echo 00:00:08.658 Sorcerer 10.211.164.101 is alive 00:00:08.668 [Pipeline] retry 00:00:08.670 [Pipeline] { 00:00:08.684 [Pipeline] httpRequest 00:00:08.689 HttpMethod: GET 00:00:08.689 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.690 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.700 Response Code: HTTP/1.1 200 OK 00:00:08.701 Success: Status code 200 is in the accepted range: 200,404 00:00:08.701 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.658 [Pipeline] } 00:00:15.668 [Pipeline] // retry 00:00:15.672 [Pipeline] sh 00:00:15.954 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.971 [Pipeline] httpRequest 00:00:16.775 [Pipeline] echo 00:00:16.776 Sorcerer 10.211.164.101 is alive 00:00:16.784 [Pipeline] retry 00:00:16.787 [Pipeline] { 00:00:16.802 [Pipeline] httpRequest 00:00:16.807 HttpMethod: GET 00:00:16.808 URL: http://10.211.164.101/packages/spdk_b09de013a5df946650e14acd608a19c0cce22140.tar.gz 00:00:16.808 Sending request to url: http://10.211.164.101/packages/spdk_b09de013a5df946650e14acd608a19c0cce22140.tar.gz 00:00:16.830 Response Code: HTTP/1.1 200 OK 00:00:16.830 Success: Status code 200 is in the accepted range: 200,404 00:00:16.831 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b09de013a5df946650e14acd608a19c0cce22140.tar.gz 00:02:12.908 [Pipeline] } 00:02:12.927 [Pipeline] // retry 00:02:12.935 [Pipeline] sh 00:02:13.220 + tar --no-same-owner -xf spdk_b09de013a5df946650e14acd608a19c0cce22140.tar.gz 00:02:15.776 [Pipeline] sh 00:02:16.065 + git -C spdk log --oneline -n5 00:02:16.065 b09de013a nvmf: Get metadata config by not bdev but bdev_desc 00:02:16.065 971ec0126 bdevperf: Add hide_metadata option 00:02:16.065 894d5af2a bdevperf: Get metadata config by not bdev but bdev_desc 00:02:16.065 075fb5b8c bdevperf: Store the result of DIF type check into job structure 00:02:16.065 7cc16c961 bdevperf: g_main_thread calls bdev_open() instead of job->thread 00:02:16.076 [Pipeline] } 00:02:16.091 [Pipeline] // stage 00:02:16.100 [Pipeline] stage 00:02:16.103 [Pipeline] { (Prepare) 00:02:16.119 [Pipeline] writeFile 00:02:16.135 [Pipeline] sh 00:02:16.420 + logger -p user.info -t JENKINS-CI 00:02:16.433 [Pipeline] sh 00:02:16.718 + logger -p user.info -t JENKINS-CI 00:02:16.732 [Pipeline] sh 00:02:17.022 + cat autorun-spdk.conf 00:02:17.023 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:17.023 SPDK_TEST_NVMF=1 00:02:17.023 SPDK_TEST_NVME_CLI=1 00:02:17.023 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:17.023 SPDK_TEST_NVMF_NICS=e810 00:02:17.023 SPDK_TEST_VFIOUSER=1 00:02:17.023 SPDK_RUN_UBSAN=1 00:02:17.023 NET_TYPE=phy 00:02:17.035 RUN_NIGHTLY=0 00:02:17.040 [Pipeline] readFile 00:02:17.091 [Pipeline] withEnv 00:02:17.094 [Pipeline] { 00:02:17.108 [Pipeline] sh 00:02:17.427 + set -ex 00:02:17.427 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:17.427 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:17.427 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:17.427 ++ SPDK_TEST_NVMF=1 00:02:17.427 ++ SPDK_TEST_NVME_CLI=1 00:02:17.427 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:02:17.427 ++ SPDK_TEST_NVMF_NICS=e810 00:02:17.427 ++ SPDK_TEST_VFIOUSER=1 00:02:17.427 ++ SPDK_RUN_UBSAN=1 00:02:17.427 ++ NET_TYPE=phy 00:02:17.427 ++ RUN_NIGHTLY=0 00:02:17.427 + case $SPDK_TEST_NVMF_NICS in 00:02:17.427 + DRIVERS=ice 00:02:17.427 + [[ tcp == \r\d\m\a ]] 00:02:17.427 + [[ -n ice ]] 00:02:17.427 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:17.427 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:17.427 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:17.427 rmmod: ERROR: Module irdma is not currently loaded 00:02:17.427 rmmod: ERROR: Module i40iw is not currently loaded 00:02:17.427 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:17.427 + true 00:02:17.427 + for D in $DRIVERS 00:02:17.427 + sudo modprobe ice 00:02:17.427 + exit 0 00:02:17.437 [Pipeline] } 00:02:17.453 [Pipeline] // withEnv 00:02:17.458 [Pipeline] } 00:02:17.476 [Pipeline] // stage 00:02:17.487 [Pipeline] catchError 00:02:17.489 [Pipeline] { 00:02:17.503 [Pipeline] timeout 00:02:17.503 Timeout set to expire in 1 hr 0 min 00:02:17.505 [Pipeline] { 00:02:17.519 [Pipeline] stage 00:02:17.521 [Pipeline] { (Tests) 00:02:17.536 [Pipeline] sh 00:02:17.826 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:17.826 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:17.826 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:17.826 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:17.826 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:17.826 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:17.826 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:17.826 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:17.826 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:17.826 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:17.826 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:17.826 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:17.826 + source /etc/os-release 00:02:17.826 ++ NAME='Fedora Linux' 00:02:17.826 ++ VERSION='39 (Cloud Edition)' 00:02:17.826 ++ ID=fedora 00:02:17.826 ++ VERSION_ID=39 00:02:17.826 ++ VERSION_CODENAME= 00:02:17.826 ++ PLATFORM_ID=platform:f39 00:02:17.826 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:17.826 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:17.826 ++ LOGO=fedora-logo-icon 00:02:17.826 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:17.826 ++ HOME_URL=https://fedoraproject.org/ 00:02:17.826 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:17.826 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:17.826 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:17.826 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:17.826 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:17.826 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:17.826 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:17.826 ++ SUPPORT_END=2024-11-12 00:02:17.826 ++ VARIANT='Cloud Edition' 00:02:17.826 ++ VARIANT_ID=cloud 00:02:17.826 + uname -a 00:02:17.826 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:17.826 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:20.364 Hugepages 00:02:20.364 node hugesize free / total 00:02:20.364 node0 1048576kB 0 / 0 00:02:20.364 node0 2048kB 0 / 0 00:02:20.364 node1 1048576kB 0 / 0 00:02:20.364 node1 2048kB 0 / 0 
00:02:20.364 00:02:20.364 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:20.364 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:20.364 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:20.364 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:20.364 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:20.364 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:20.364 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:20.364 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:20.364 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:20.364 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:20.364 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:20.364 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:20.364 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:20.364 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:20.364 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:20.364 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:20.364 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:20.364 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:20.364 + rm -f /tmp/spdk-ld-path 00:02:20.364 + source autorun-spdk.conf 00:02:20.364 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:20.364 ++ SPDK_TEST_NVMF=1 00:02:20.364 ++ SPDK_TEST_NVME_CLI=1 00:02:20.364 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:20.364 ++ SPDK_TEST_NVMF_NICS=e810 00:02:20.364 ++ SPDK_TEST_VFIOUSER=1 00:02:20.364 ++ SPDK_RUN_UBSAN=1 00:02:20.364 ++ NET_TYPE=phy 00:02:20.364 ++ RUN_NIGHTLY=0 00:02:20.364 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:20.364 + [[ -n '' ]] 00:02:20.364 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:20.364 + for M in /var/spdk/build-*-manifest.txt 00:02:20.364 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:20.364 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:20.364 + for M in /var/spdk/build-*-manifest.txt 00:02:20.364 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:20.364 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:20.364 + for M in /var/spdk/build-*-manifest.txt 00:02:20.364 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:20.364 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:20.364 ++ uname 00:02:20.364 + [[ Linux == \L\i\n\u\x ]] 00:02:20.364 + sudo dmesg -T 00:02:20.625 + sudo dmesg --clear 00:02:20.625 + dmesg_pid=3467073 00:02:20.625 + [[ Fedora Linux == FreeBSD ]] 00:02:20.625 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:20.625 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:20.625 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:20.625 + [[ -x /usr/src/fio-static/fio ]] 00:02:20.625 + sudo dmesg -Tw 00:02:20.625 + export FIO_BIN=/usr/src/fio-static/fio 00:02:20.625 + FIO_BIN=/usr/src/fio-static/fio 00:02:20.625 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:20.625 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:20.625 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:20.625 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:20.625 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:20.625 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:20.625 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:20.625 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:20.625 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:20.625 19:03:43 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:20.625 19:03:43 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:20.625 19:03:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:20.625 19:03:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:20.625 19:03:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:02:20.625 19:03:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:20.625 19:03:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:20.625 19:03:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:20.625 19:03:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:20.625 19:03:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:20.625 19:03:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:20.625 19:03:43 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:20.625 19:03:43 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:20.625 19:03:43 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:20.625 19:03:43 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:20.625 19:03:43 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:20.625 19:03:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:20.625 19:03:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:20.625 19:03:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:20.625 19:03:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.625 19:03:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.625 19:03:43 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.625 19:03:43 -- paths/export.sh@5 -- $ export PATH 00:02:20.625 19:03:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.625 19:03:43 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:20.625 19:03:43 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:20.625 19:03:43 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732644223.XXXXXX 00:02:20.625 19:03:43 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732644223.cb82ku 00:02:20.625 19:03:43 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:20.625 19:03:43 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:20.625 19:03:43 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:20.625 19:03:43 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:20.625 19:03:43 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:20.625 19:03:43 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:20.625 19:03:43 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:20.625 19:03:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.625 19:03:43 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:20.625 19:03:43 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:20.625 19:03:43 -- pm/common@17 -- $ local monitor 00:02:20.625 19:03:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.625 19:03:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.625 19:03:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.625 19:03:43 -- pm/common@21 -- $ date +%s 00:02:20.625 19:03:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.625 19:03:43 -- pm/common@21 -- $ date +%s 00:02:20.625 19:03:43 -- pm/common@25 -- $ sleep 1 00:02:20.625 19:03:43 -- pm/common@21 -- $ date +%s 00:02:20.625 19:03:43 -- pm/common@21 -- $ date +%s 00:02:20.625 19:03:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732644223 00:02:20.625 19:03:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732644223 00:02:20.625 19:03:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732644223 00:02:20.625 19:03:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732644223 00:02:20.885 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732644223_collect-cpu-load.pm.log 00:02:20.885 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732644223_collect-vmstat.pm.log 00:02:20.885 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732644223_collect-cpu-temp.pm.log 00:02:20.885 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732644223_collect-bmc-pm.bmc.pm.log 00:02:21.824 19:03:44 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:21.824 19:03:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:21.824 19:03:44 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:21.824 19:03:44 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:21.824 19:03:44 -- spdk/autobuild.sh@16 -- $ date -u 00:02:21.824 Tue Nov 26 06:03:44 PM UTC 2024 00:02:21.824 19:03:44 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:21.824 v25.01-pre-261-gb09de013a 00:02:21.824 19:03:44 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:21.824 19:03:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:21.824 19:03:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:21.824 19:03:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:21.824 19:03:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:21.824 19:03:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.824 ************************************ 00:02:21.824 START TEST ubsan 00:02:21.824 ************************************ 00:02:21.824 19:03:44 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:21.824 using ubsan 00:02:21.824 00:02:21.824 real 0m0.000s 00:02:21.824 user 0m0.000s 00:02:21.824 sys 0m0.000s 00:02:21.824 19:03:44 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:21.824 19:03:44 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:21.824 ************************************ 00:02:21.824 END TEST ubsan 00:02:21.824 ************************************ 00:02:21.824 19:03:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:21.824 19:03:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:21.824 19:03:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:21.824 19:03:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:21.824 19:03:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:21.824 19:03:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:21.824 19:03:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:21.824 19:03:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:21.824 
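Everything from here on is driven by autorun-spdk.conf, which the job wrote and sourced earlier and which spdk/autorun.sh re-reads above before starting the resource monitors (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm, each redirecting under output/power). For reference, the configuration is just a flat shell fragment; the values below are the ones echoed in this log:

# autorun-spdk.conf as echoed above: plain KEY=value assignments sourced by autorun.sh.
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_NVMF=1
SPDK_TEST_NVME_CLI=1
SPDK_TEST_NVMF_TRANSPORT=tcp
SPDK_TEST_NVMF_NICS=e810   # this is what selected DRIVERS=ice and the "modprobe ice" step earlier
SPDK_TEST_VFIOUSER=1
SPDK_RUN_UBSAN=1
NET_TYPE=phy
RUN_NIGHTLY=0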
19:03:44 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:22.082 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:22.083 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:22.341 Using 'verbs' RDMA provider 00:02:35.512 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:47.726 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:47.726 Creating mk/config.mk...done. 00:02:47.726 Creating mk/cc.flags.mk...done. 00:02:47.726 Type 'make' to build. 00:02:47.726 19:04:10 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:02:47.726 19:04:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:47.726 19:04:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:47.726 19:04:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:47.726 ************************************ 00:02:47.726 START TEST make 00:02:47.726 ************************************ 00:02:47.726 19:04:10 make -- common/autotest_common.sh@1129 -- $ make -j96 00:02:47.726 make[1]: Nothing to be done for 'all'. 00:02:49.120 The Meson build system 00:02:49.120 Version: 1.5.0 00:02:49.120 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:49.120 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:49.120 Build type: native build 00:02:49.120 Project name: libvfio-user 00:02:49.120 Project version: 0.0.1 00:02:49.120 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:49.120 C linker for the host machine: cc ld.bfd 2.40-14 00:02:49.120 Host machine cpu family: x86_64 00:02:49.120 Host machine cpu: x86_64 00:02:49.120 Run-time dependency threads found: YES 00:02:49.120 Library dl found: YES 00:02:49.120 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:49.120 Run-time dependency json-c found: YES 0.17 00:02:49.120 Run-time dependency cmocka found: YES 1.1.7 00:02:49.120 Program pytest-3 found: NO 00:02:49.120 Program flake8 found: NO 00:02:49.120 Program misspell-fixer found: NO 00:02:49.120 Program restructuredtext-lint found: NO 00:02:49.120 Program valgrind found: YES (/usr/bin/valgrind) 00:02:49.120 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:49.120 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:49.120 Compiler for C supports arguments -Wwrite-strings: YES 00:02:49.120 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:49.120 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:49.120 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:49.120 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
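The build itself is the stock SPDK flow: configure with the flag set assembled from the test configuration, let it fall back to the bundled DPDK and the verbs RDMA provider, then run a parallel make under run_test. A hedged sketch of reproducing the same step outside Jenkins, assuming a checked-out SPDK tree in ./spdk and reusing the flags exactly as logged above:

# Sketch only: flags copied from the configure line in the log; the ./spdk path is an assumption.
cd spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
# The job runs make -j96; scale the job count to the local machine.
make -j"$(nproc)"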
00:02:49.120 Build targets in project: 8 00:02:49.120 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:49.120 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:49.120 00:02:49.120 libvfio-user 0.0.1 00:02:49.120 00:02:49.120 User defined options 00:02:49.120 buildtype : debug 00:02:49.120 default_library: shared 00:02:49.120 libdir : /usr/local/lib 00:02:49.120 00:02:49.120 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:49.687 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:49.687 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:49.687 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:49.687 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:49.687 [4/37] Compiling C object samples/null.p/null.c.o 00:02:49.687 [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:49.687 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:49.687 [7/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:49.687 [8/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:49.687 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:49.687 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:49.687 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:49.687 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:49.687 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:49.687 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:49.687 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:49.687 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:49.687 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:49.945 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:49.945 [19/37] Compiling C object samples/server.p/server.c.o 00:02:49.945 [20/37] Compiling C object samples/client.p/client.c.o 00:02:49.945 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:49.945 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:49.945 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:49.945 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:49.945 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:49.945 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:49.945 [27/37] Linking target samples/client 00:02:49.945 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:49.945 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:49.945 [30/37] Linking target test/unit_tests 00:02:49.945 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:02:50.202 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:50.202 [33/37] Linking target samples/gpio-pci-idio-16 00:02:50.202 [34/37] Linking target samples/server 00:02:50.202 [35/37] Linking target samples/null 00:02:50.202 [36/37] Linking target samples/lspci 00:02:50.202 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:50.202 INFO: autodetecting backend as ninja 00:02:50.202 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
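The libvfio-user submodule is configured with Meson using the options listed above (buildtype debug, shared default_library, libdir /usr/local/lib), compiled with ninja (37 targets), and then staged into spdk/build/libvfio-user via the DESTDIR install shown on the next log line. A rough standalone equivalent, assuming the same in-tree source and build directories and no options beyond those listed (SPDK's own scripts may pass more):

# Sketch of the libvfio-user build as summarized in the log above.
SRC=spdk/libvfio-user
BUILD=spdk/build/libvfio-user/build-debug
meson setup "$BUILD" "$SRC" --buildtype debug --default-library shared --libdir /usr/local/lib
ninja -C "$BUILD"
DESTDIR="$PWD/spdk/build/libvfio-user" meson install --quiet -C "$BUILD"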
00:02:50.202 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:50.769 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:50.769 ninja: no work to do. 00:02:56.132 The Meson build system 00:02:56.132 Version: 1.5.0 00:02:56.132 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:56.132 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:56.132 Build type: native build 00:02:56.132 Program cat found: YES (/usr/bin/cat) 00:02:56.132 Project name: DPDK 00:02:56.133 Project version: 24.03.0 00:02:56.133 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:56.133 C linker for the host machine: cc ld.bfd 2.40-14 00:02:56.133 Host machine cpu family: x86_64 00:02:56.133 Host machine cpu: x86_64 00:02:56.133 Message: ## Building in Developer Mode ## 00:02:56.133 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:56.133 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:56.133 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:56.133 Program python3 found: YES (/usr/bin/python3) 00:02:56.133 Program cat found: YES (/usr/bin/cat) 00:02:56.133 Compiler for C supports arguments -march=native: YES 00:02:56.133 Checking for size of "void *" : 8 00:02:56.133 Checking for size of "void *" : 8 (cached) 00:02:56.133 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:56.133 Library m found: YES 00:02:56.133 Library numa found: YES 00:02:56.133 Has header "numaif.h" : YES 00:02:56.133 Library fdt found: NO 00:02:56.133 Library execinfo found: NO 00:02:56.133 Has header "execinfo.h" : YES 00:02:56.133 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:56.133 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:56.133 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:56.133 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:56.133 Run-time dependency openssl found: YES 3.1.1 00:02:56.133 Run-time dependency libpcap found: YES 1.10.4 00:02:56.133 Has header "pcap.h" with dependency libpcap: YES 00:02:56.133 Compiler for C supports arguments -Wcast-qual: YES 00:02:56.133 Compiler for C supports arguments -Wdeprecated: YES 00:02:56.133 Compiler for C supports arguments -Wformat: YES 00:02:56.133 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:56.133 Compiler for C supports arguments -Wformat-security: NO 00:02:56.133 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:56.133 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:56.133 Compiler for C supports arguments -Wnested-externs: YES 00:02:56.133 Compiler for C supports arguments -Wold-style-definition: YES 00:02:56.133 Compiler for C supports arguments -Wpointer-arith: YES 00:02:56.133 Compiler for C supports arguments -Wsign-compare: YES 00:02:56.133 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:56.133 Compiler for C supports arguments -Wundef: YES 00:02:56.133 Compiler for C supports arguments -Wwrite-strings: YES 00:02:56.133 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:56.133 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:02:56.133 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:56.133 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:56.133 Program objdump found: YES (/usr/bin/objdump) 00:02:56.133 Compiler for C supports arguments -mavx512f: YES 00:02:56.133 Checking if "AVX512 checking" compiles: YES 00:02:56.133 Fetching value of define "__SSE4_2__" : 1 00:02:56.133 Fetching value of define "__AES__" : 1 00:02:56.133 Fetching value of define "__AVX__" : 1 00:02:56.133 Fetching value of define "__AVX2__" : 1 00:02:56.133 Fetching value of define "__AVX512BW__" : 1 00:02:56.133 Fetching value of define "__AVX512CD__" : 1 00:02:56.133 Fetching value of define "__AVX512DQ__" : 1 00:02:56.133 Fetching value of define "__AVX512F__" : 1 00:02:56.133 Fetching value of define "__AVX512VL__" : 1 00:02:56.133 Fetching value of define "__PCLMUL__" : 1 00:02:56.133 Fetching value of define "__RDRND__" : 1 00:02:56.133 Fetching value of define "__RDSEED__" : 1 00:02:56.133 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:56.133 Fetching value of define "__znver1__" : (undefined) 00:02:56.133 Fetching value of define "__znver2__" : (undefined) 00:02:56.133 Fetching value of define "__znver3__" : (undefined) 00:02:56.133 Fetching value of define "__znver4__" : (undefined) 00:02:56.133 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:56.133 Message: lib/log: Defining dependency "log" 00:02:56.133 Message: lib/kvargs: Defining dependency "kvargs" 00:02:56.133 Message: lib/telemetry: Defining dependency "telemetry" 00:02:56.133 Checking for function "getentropy" : NO 00:02:56.133 Message: lib/eal: Defining dependency "eal" 00:02:56.133 Message: lib/ring: Defining dependency "ring" 00:02:56.133 Message: lib/rcu: Defining dependency "rcu" 00:02:56.133 Message: lib/mempool: Defining dependency "mempool" 00:02:56.133 Message: lib/mbuf: Defining dependency "mbuf" 00:02:56.133 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:56.133 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:56.133 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:56.133 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:56.133 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:56.133 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:56.133 Compiler for C supports arguments -mpclmul: YES 00:02:56.133 Compiler for C supports arguments -maes: YES 00:02:56.133 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:56.133 Compiler for C supports arguments -mavx512bw: YES 00:02:56.133 Compiler for C supports arguments -mavx512dq: YES 00:02:56.133 Compiler for C supports arguments -mavx512vl: YES 00:02:56.133 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:56.133 Compiler for C supports arguments -mavx2: YES 00:02:56.133 Compiler for C supports arguments -mavx: YES 00:02:56.133 Message: lib/net: Defining dependency "net" 00:02:56.133 Message: lib/meter: Defining dependency "meter" 00:02:56.133 Message: lib/ethdev: Defining dependency "ethdev" 00:02:56.133 Message: lib/pci: Defining dependency "pci" 00:02:56.133 Message: lib/cmdline: Defining dependency "cmdline" 00:02:56.133 Message: lib/hash: Defining dependency "hash" 00:02:56.133 Message: lib/timer: Defining dependency "timer" 00:02:56.133 Message: lib/compressdev: Defining dependency "compressdev" 00:02:56.133 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:56.133 Message: lib/dmadev: Defining dependency 
"dmadev" 00:02:56.133 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:56.133 Message: lib/power: Defining dependency "power" 00:02:56.133 Message: lib/reorder: Defining dependency "reorder" 00:02:56.133 Message: lib/security: Defining dependency "security" 00:02:56.133 Has header "linux/userfaultfd.h" : YES 00:02:56.133 Has header "linux/vduse.h" : YES 00:02:56.133 Message: lib/vhost: Defining dependency "vhost" 00:02:56.133 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:56.133 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:56.133 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:56.133 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:56.133 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:56.133 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:56.133 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:56.133 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:56.133 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:56.133 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:56.133 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:56.133 Configuring doxy-api-html.conf using configuration 00:02:56.133 Configuring doxy-api-man.conf using configuration 00:02:56.133 Program mandb found: YES (/usr/bin/mandb) 00:02:56.133 Program sphinx-build found: NO 00:02:56.133 Configuring rte_build_config.h using configuration 00:02:56.133 Message: 00:02:56.133 ================= 00:02:56.133 Applications Enabled 00:02:56.133 ================= 00:02:56.133 00:02:56.133 apps: 00:02:56.133 00:02:56.133 00:02:56.133 Message: 00:02:56.133 ================= 00:02:56.133 Libraries Enabled 00:02:56.133 ================= 00:02:56.133 00:02:56.133 libs: 00:02:56.133 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:56.133 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:56.133 cryptodev, dmadev, power, reorder, security, vhost, 00:02:56.133 00:02:56.133 Message: 00:02:56.133 =============== 00:02:56.133 Drivers Enabled 00:02:56.133 =============== 00:02:56.133 00:02:56.133 common: 00:02:56.133 00:02:56.133 bus: 00:02:56.133 pci, vdev, 00:02:56.133 mempool: 00:02:56.133 ring, 00:02:56.133 dma: 00:02:56.133 00:02:56.133 net: 00:02:56.133 00:02:56.133 crypto: 00:02:56.133 00:02:56.133 compress: 00:02:56.133 00:02:56.133 vdpa: 00:02:56.133 00:02:56.133 00:02:56.133 Message: 00:02:56.133 ================= 00:02:56.133 Content Skipped 00:02:56.133 ================= 00:02:56.133 00:02:56.133 apps: 00:02:56.133 dumpcap: explicitly disabled via build config 00:02:56.133 graph: explicitly disabled via build config 00:02:56.133 pdump: explicitly disabled via build config 00:02:56.133 proc-info: explicitly disabled via build config 00:02:56.133 test-acl: explicitly disabled via build config 00:02:56.133 test-bbdev: explicitly disabled via build config 00:02:56.133 test-cmdline: explicitly disabled via build config 00:02:56.133 test-compress-perf: explicitly disabled via build config 00:02:56.133 test-crypto-perf: explicitly disabled via build config 00:02:56.133 test-dma-perf: explicitly disabled via build config 00:02:56.133 test-eventdev: explicitly disabled via build config 00:02:56.133 test-fib: explicitly disabled via build config 00:02:56.133 test-flow-perf: explicitly disabled via build config 00:02:56.133 test-gpudev: explicitly 
disabled via build config 00:02:56.133 test-mldev: explicitly disabled via build config 00:02:56.133 test-pipeline: explicitly disabled via build config 00:02:56.133 test-pmd: explicitly disabled via build config 00:02:56.133 test-regex: explicitly disabled via build config 00:02:56.133 test-sad: explicitly disabled via build config 00:02:56.133 test-security-perf: explicitly disabled via build config 00:02:56.133 00:02:56.133 libs: 00:02:56.133 argparse: explicitly disabled via build config 00:02:56.133 metrics: explicitly disabled via build config 00:02:56.133 acl: explicitly disabled via build config 00:02:56.133 bbdev: explicitly disabled via build config 00:02:56.134 bitratestats: explicitly disabled via build config 00:02:56.134 bpf: explicitly disabled via build config 00:02:56.134 cfgfile: explicitly disabled via build config 00:02:56.134 distributor: explicitly disabled via build config 00:02:56.134 efd: explicitly disabled via build config 00:02:56.134 eventdev: explicitly disabled via build config 00:02:56.134 dispatcher: explicitly disabled via build config 00:02:56.134 gpudev: explicitly disabled via build config 00:02:56.134 gro: explicitly disabled via build config 00:02:56.134 gso: explicitly disabled via build config 00:02:56.134 ip_frag: explicitly disabled via build config 00:02:56.134 jobstats: explicitly disabled via build config 00:02:56.134 latencystats: explicitly disabled via build config 00:02:56.134 lpm: explicitly disabled via build config 00:02:56.134 member: explicitly disabled via build config 00:02:56.134 pcapng: explicitly disabled via build config 00:02:56.134 rawdev: explicitly disabled via build config 00:02:56.134 regexdev: explicitly disabled via build config 00:02:56.134 mldev: explicitly disabled via build config 00:02:56.134 rib: explicitly disabled via build config 00:02:56.134 sched: explicitly disabled via build config 00:02:56.134 stack: explicitly disabled via build config 00:02:56.134 ipsec: explicitly disabled via build config 00:02:56.134 pdcp: explicitly disabled via build config 00:02:56.134 fib: explicitly disabled via build config 00:02:56.134 port: explicitly disabled via build config 00:02:56.134 pdump: explicitly disabled via build config 00:02:56.134 table: explicitly disabled via build config 00:02:56.134 pipeline: explicitly disabled via build config 00:02:56.134 graph: explicitly disabled via build config 00:02:56.134 node: explicitly disabled via build config 00:02:56.134 00:02:56.134 drivers: 00:02:56.134 common/cpt: not in enabled drivers build config 00:02:56.134 common/dpaax: not in enabled drivers build config 00:02:56.134 common/iavf: not in enabled drivers build config 00:02:56.134 common/idpf: not in enabled drivers build config 00:02:56.134 common/ionic: not in enabled drivers build config 00:02:56.134 common/mvep: not in enabled drivers build config 00:02:56.134 common/octeontx: not in enabled drivers build config 00:02:56.134 bus/auxiliary: not in enabled drivers build config 00:02:56.134 bus/cdx: not in enabled drivers build config 00:02:56.134 bus/dpaa: not in enabled drivers build config 00:02:56.134 bus/fslmc: not in enabled drivers build config 00:02:56.134 bus/ifpga: not in enabled drivers build config 00:02:56.134 bus/platform: not in enabled drivers build config 00:02:56.134 bus/uacce: not in enabled drivers build config 00:02:56.134 bus/vmbus: not in enabled drivers build config 00:02:56.134 common/cnxk: not in enabled drivers build config 00:02:56.134 common/mlx5: not in enabled drivers build config 
00:02:56.134 common/nfp: not in enabled drivers build config 00:02:56.134 common/nitrox: not in enabled drivers build config 00:02:56.134 common/qat: not in enabled drivers build config 00:02:56.134 common/sfc_efx: not in enabled drivers build config 00:02:56.134 mempool/bucket: not in enabled drivers build config 00:02:56.134 mempool/cnxk: not in enabled drivers build config 00:02:56.134 mempool/dpaa: not in enabled drivers build config 00:02:56.134 mempool/dpaa2: not in enabled drivers build config 00:02:56.134 mempool/octeontx: not in enabled drivers build config 00:02:56.134 mempool/stack: not in enabled drivers build config 00:02:56.134 dma/cnxk: not in enabled drivers build config 00:02:56.134 dma/dpaa: not in enabled drivers build config 00:02:56.134 dma/dpaa2: not in enabled drivers build config 00:02:56.134 dma/hisilicon: not in enabled drivers build config 00:02:56.134 dma/idxd: not in enabled drivers build config 00:02:56.134 dma/ioat: not in enabled drivers build config 00:02:56.134 dma/skeleton: not in enabled drivers build config 00:02:56.134 net/af_packet: not in enabled drivers build config 00:02:56.134 net/af_xdp: not in enabled drivers build config 00:02:56.134 net/ark: not in enabled drivers build config 00:02:56.134 net/atlantic: not in enabled drivers build config 00:02:56.134 net/avp: not in enabled drivers build config 00:02:56.134 net/axgbe: not in enabled drivers build config 00:02:56.134 net/bnx2x: not in enabled drivers build config 00:02:56.134 net/bnxt: not in enabled drivers build config 00:02:56.134 net/bonding: not in enabled drivers build config 00:02:56.134 net/cnxk: not in enabled drivers build config 00:02:56.134 net/cpfl: not in enabled drivers build config 00:02:56.134 net/cxgbe: not in enabled drivers build config 00:02:56.134 net/dpaa: not in enabled drivers build config 00:02:56.134 net/dpaa2: not in enabled drivers build config 00:02:56.134 net/e1000: not in enabled drivers build config 00:02:56.134 net/ena: not in enabled drivers build config 00:02:56.134 net/enetc: not in enabled drivers build config 00:02:56.134 net/enetfec: not in enabled drivers build config 00:02:56.134 net/enic: not in enabled drivers build config 00:02:56.134 net/failsafe: not in enabled drivers build config 00:02:56.134 net/fm10k: not in enabled drivers build config 00:02:56.134 net/gve: not in enabled drivers build config 00:02:56.134 net/hinic: not in enabled drivers build config 00:02:56.134 net/hns3: not in enabled drivers build config 00:02:56.134 net/i40e: not in enabled drivers build config 00:02:56.134 net/iavf: not in enabled drivers build config 00:02:56.134 net/ice: not in enabled drivers build config 00:02:56.134 net/idpf: not in enabled drivers build config 00:02:56.134 net/igc: not in enabled drivers build config 00:02:56.134 net/ionic: not in enabled drivers build config 00:02:56.134 net/ipn3ke: not in enabled drivers build config 00:02:56.134 net/ixgbe: not in enabled drivers build config 00:02:56.134 net/mana: not in enabled drivers build config 00:02:56.134 net/memif: not in enabled drivers build config 00:02:56.134 net/mlx4: not in enabled drivers build config 00:02:56.134 net/mlx5: not in enabled drivers build config 00:02:56.134 net/mvneta: not in enabled drivers build config 00:02:56.134 net/mvpp2: not in enabled drivers build config 00:02:56.134 net/netvsc: not in enabled drivers build config 00:02:56.134 net/nfb: not in enabled drivers build config 00:02:56.134 net/nfp: not in enabled drivers build config 00:02:56.134 net/ngbe: not in enabled 
drivers build config 00:02:56.134 net/null: not in enabled drivers build config 00:02:56.134 net/octeontx: not in enabled drivers build config 00:02:56.134 net/octeon_ep: not in enabled drivers build config 00:02:56.134 net/pcap: not in enabled drivers build config 00:02:56.134 net/pfe: not in enabled drivers build config 00:02:56.134 net/qede: not in enabled drivers build config 00:02:56.134 net/ring: not in enabled drivers build config 00:02:56.134 net/sfc: not in enabled drivers build config 00:02:56.134 net/softnic: not in enabled drivers build config 00:02:56.134 net/tap: not in enabled drivers build config 00:02:56.134 net/thunderx: not in enabled drivers build config 00:02:56.134 net/txgbe: not in enabled drivers build config 00:02:56.134 net/vdev_netvsc: not in enabled drivers build config 00:02:56.134 net/vhost: not in enabled drivers build config 00:02:56.134 net/virtio: not in enabled drivers build config 00:02:56.134 net/vmxnet3: not in enabled drivers build config 00:02:56.134 raw/*: missing internal dependency, "rawdev" 00:02:56.134 crypto/armv8: not in enabled drivers build config 00:02:56.134 crypto/bcmfs: not in enabled drivers build config 00:02:56.134 crypto/caam_jr: not in enabled drivers build config 00:02:56.134 crypto/ccp: not in enabled drivers build config 00:02:56.134 crypto/cnxk: not in enabled drivers build config 00:02:56.134 crypto/dpaa_sec: not in enabled drivers build config 00:02:56.134 crypto/dpaa2_sec: not in enabled drivers build config 00:02:56.134 crypto/ipsec_mb: not in enabled drivers build config 00:02:56.134 crypto/mlx5: not in enabled drivers build config 00:02:56.134 crypto/mvsam: not in enabled drivers build config 00:02:56.134 crypto/nitrox: not in enabled drivers build config 00:02:56.134 crypto/null: not in enabled drivers build config 00:02:56.134 crypto/octeontx: not in enabled drivers build config 00:02:56.134 crypto/openssl: not in enabled drivers build config 00:02:56.134 crypto/scheduler: not in enabled drivers build config 00:02:56.134 crypto/uadk: not in enabled drivers build config 00:02:56.134 crypto/virtio: not in enabled drivers build config 00:02:56.134 compress/isal: not in enabled drivers build config 00:02:56.134 compress/mlx5: not in enabled drivers build config 00:02:56.134 compress/nitrox: not in enabled drivers build config 00:02:56.134 compress/octeontx: not in enabled drivers build config 00:02:56.134 compress/zlib: not in enabled drivers build config 00:02:56.134 regex/*: missing internal dependency, "regexdev" 00:02:56.134 ml/*: missing internal dependency, "mldev" 00:02:56.134 vdpa/ifc: not in enabled drivers build config 00:02:56.134 vdpa/mlx5: not in enabled drivers build config 00:02:56.134 vdpa/nfp: not in enabled drivers build config 00:02:56.134 vdpa/sfc: not in enabled drivers build config 00:02:56.134 event/*: missing internal dependency, "eventdev" 00:02:56.134 baseband/*: missing internal dependency, "bbdev" 00:02:56.134 gpu/*: missing internal dependency, "gpudev" 00:02:56.134 00:02:56.134 00:02:56.134 Build targets in project: 85 00:02:56.134 00:02:56.134 DPDK 24.03.0 00:02:56.134 00:02:56.134 User defined options 00:02:56.134 buildtype : debug 00:02:56.134 default_library : shared 00:02:56.134 libdir : lib 00:02:56.134 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:56.134 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:56.134 c_link_args : 00:02:56.134 cpu_instruction_set: native 00:02:56.134 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:56.134 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:56.134 enable_docs : false 00:02:56.134 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:56.134 enable_kmods : false 00:02:56.134 max_lcores : 128 00:02:56.134 tests : false 00:02:56.134 00:02:56.134 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:56.134 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:56.397 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:56.397 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:56.397 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:56.397 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:56.397 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:56.397 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:56.397 [7/268] Linking static target lib/librte_kvargs.a 00:02:56.397 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:56.397 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:56.397 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:56.397 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:56.397 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:56.397 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:56.397 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:56.397 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:56.397 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:56.397 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:56.397 [18/268] Linking static target lib/librte_log.a 00:02:56.397 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:56.655 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:56.655 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:56.655 [22/268] Linking static target lib/librte_pci.a 00:02:56.655 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:56.655 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:56.913 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:56.913 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:56.913 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:56.913 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:56.913 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:56.913 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 
00:02:56.913 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:56.913 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:56.913 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:56.913 [34/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:56.914 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:56.914 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:56.914 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:56.914 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:56.914 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:56.914 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:56.914 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:56.914 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:56.914 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:56.914 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:56.914 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:56.914 [46/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:56.914 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:56.914 [48/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:56.914 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:56.914 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:56.914 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:56.914 [52/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:56.914 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:56.914 [54/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:56.914 [55/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:56.914 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:56.914 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:56.914 [58/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:56.914 [59/268] Linking static target lib/librte_meter.a 00:02:56.914 [60/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:56.914 [61/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:56.914 [62/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:56.914 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:56.914 [64/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:56.914 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:56.914 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:56.914 [67/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:56.914 [68/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:56.914 [69/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:56.914 [70/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:56.914 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:56.914 [72/268] Linking static target lib/librte_ring.a 00:02:56.914 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:56.914 [74/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:56.914 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:56.914 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:56.914 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:56.914 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:56.914 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:56.914 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:56.914 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:56.914 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:56.914 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:56.914 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:56.914 [85/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:56.914 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:56.914 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:56.914 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:56.914 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:56.914 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:56.914 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:56.914 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:56.914 [93/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:57.172 [94/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:57.172 [95/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:57.172 [96/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:57.172 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:57.172 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:57.172 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:57.172 [100/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.172 [101/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:57.172 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:57.172 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:57.172 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:57.173 [105/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:57.173 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:57.173 [107/268] Linking static target lib/librte_telemetry.a 00:02:57.173 [108/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:57.173 [109/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:57.173 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:57.173 [111/268] Linking 
static target lib/librte_net.a 00:02:57.173 [112/268] Linking static target lib/librte_mempool.a 00:02:57.173 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:57.173 [114/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:57.173 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:57.173 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:57.173 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:57.173 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:57.173 [119/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:57.173 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:57.173 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:57.173 [122/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:57.173 [123/268] Linking static target lib/librte_eal.a 00:02:57.173 [124/268] Linking static target lib/librte_rcu.a 00:02:57.173 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:57.173 [126/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:57.173 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:57.173 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:57.173 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:57.173 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:57.173 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:57.173 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:57.173 [133/268] Linking static target lib/librte_cmdline.a 00:02:57.173 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:57.173 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.173 [136/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:57.431 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:57.431 [138/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.431 [139/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.431 [140/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:57.431 [141/268] Linking target lib/librte_log.so.24.1 00:02:57.431 [142/268] Linking static target lib/librte_timer.a 00:02:57.431 [143/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:57.431 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:57.431 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:57.431 [146/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:57.431 [147/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.431 [148/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:57.431 [149/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:57.431 [150/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:57.431 [151/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:57.431 
[152/268] Linking static target lib/librte_mbuf.a 00:02:57.431 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:57.431 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:57.431 [155/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:57.431 [156/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.431 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:57.431 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:57.431 [159/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:57.431 [160/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:57.431 [161/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:57.431 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:57.432 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:57.432 [164/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:57.432 [165/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:57.432 [166/268] Linking static target lib/librte_dmadev.a 00:02:57.432 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:57.432 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:57.432 [169/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:57.432 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:57.432 [171/268] Linking static target lib/librte_power.a 00:02:57.432 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:57.432 [173/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:57.432 [174/268] Linking target lib/librte_kvargs.so.24.1 00:02:57.432 [175/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:57.691 [176/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:57.691 [177/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:57.691 [178/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:57.691 [179/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:57.691 [180/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:57.691 [181/268] Linking static target lib/librte_compressdev.a 00:02:57.691 [182/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:57.691 [183/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.691 [184/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:57.691 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:57.691 [186/268] Linking static target lib/librte_security.a 00:02:57.691 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:57.691 [188/268] Linking static target lib/librte_reorder.a 00:02:57.691 [189/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:57.691 [190/268] Linking target lib/librte_telemetry.so.24.1 00:02:57.691 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:57.691 [192/268] Generating symbol file 
lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:57.691 [193/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:57.691 [194/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:57.691 [195/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.691 [196/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:57.691 [197/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.691 [198/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.691 [199/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.691 [200/268] Linking static target drivers/librte_bus_vdev.a 00:02:57.691 [201/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.691 [202/268] Linking static target drivers/librte_mempool_ring.a 00:02:57.951 [203/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:57.951 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:57.951 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:57.951 [206/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:57.951 [207/268] Linking static target lib/librte_hash.a 00:02:57.951 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.951 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:57.951 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.951 [211/268] Linking static target drivers/librte_bus_pci.a 00:02:57.951 [212/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.951 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:57.951 [214/268] Linking static target lib/librte_cryptodev.a 00:02:57.951 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.211 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.211 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.211 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.211 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.211 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:58.211 [221/268] Linking static target lib/librte_ethdev.a 00:02:58.211 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.470 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.470 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.470 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:58.729 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.729 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.665 [228/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:59.665 [229/268] Linking static target lib/librte_vhost.a 00:02:59.924 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.298 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.568 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.136 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.136 [234/268] Linking target lib/librte_eal.so.24.1 00:03:07.395 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:07.395 [236/268] Linking target lib/librte_ring.so.24.1 00:03:07.395 [237/268] Linking target lib/librte_timer.so.24.1 00:03:07.395 [238/268] Linking target lib/librte_meter.so.24.1 00:03:07.395 [239/268] Linking target lib/librte_pci.so.24.1 00:03:07.395 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:07.395 [241/268] Linking target lib/librte_dmadev.so.24.1 00:03:07.395 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:07.395 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:07.395 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:07.395 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:07.395 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:07.654 [247/268] Linking target lib/librte_rcu.so.24.1 00:03:07.654 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:07.654 [249/268] Linking target lib/librte_mempool.so.24.1 00:03:07.654 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:07.654 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:07.654 [252/268] Linking target lib/librte_mbuf.so.24.1 00:03:07.654 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:07.914 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:07.914 [255/268] Linking target lib/librte_reorder.so.24.1 00:03:07.914 [256/268] Linking target lib/librte_compressdev.so.24.1 00:03:07.914 [257/268] Linking target lib/librte_net.so.24.1 00:03:07.914 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:07.914 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:07.914 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:08.173 [261/268] Linking target lib/librte_hash.so.24.1 00:03:08.173 [262/268] Linking target lib/librte_security.so.24.1 00:03:08.173 [263/268] Linking target lib/librte_cmdline.so.24.1 00:03:08.173 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:08.173 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:08.173 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:08.173 [267/268] Linking target lib/librte_power.so.24.1 00:03:08.432 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:08.432 INFO: autodetecting backend as ninja 00:03:08.432 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:03:20.646 CC lib/log/log.o 00:03:20.646 CC 
lib/log/log_deprecated.o 00:03:20.646 CC lib/log/log_flags.o 00:03:20.646 CC lib/ut_mock/mock.o 00:03:20.646 CC lib/ut/ut.o 00:03:20.646 LIB libspdk_ut.a 00:03:20.646 LIB libspdk_log.a 00:03:20.646 LIB libspdk_ut_mock.a 00:03:20.646 SO libspdk_ut.so.2.0 00:03:20.646 SO libspdk_log.so.7.1 00:03:20.646 SO libspdk_ut_mock.so.6.0 00:03:20.646 SYMLINK libspdk_ut_mock.so 00:03:20.646 SYMLINK libspdk_ut.so 00:03:20.646 SYMLINK libspdk_log.so 00:03:20.646 CXX lib/trace_parser/trace.o 00:03:20.646 CC lib/ioat/ioat.o 00:03:20.646 CC lib/dma/dma.o 00:03:20.646 CC lib/util/base64.o 00:03:20.646 CC lib/util/bit_array.o 00:03:20.646 CC lib/util/cpuset.o 00:03:20.646 CC lib/util/crc16.o 00:03:20.646 CC lib/util/crc32.o 00:03:20.646 CC lib/util/crc32c.o 00:03:20.646 CC lib/util/crc32_ieee.o 00:03:20.646 CC lib/util/crc64.o 00:03:20.646 CC lib/util/dif.o 00:03:20.646 CC lib/util/fd.o 00:03:20.646 CC lib/util/fd_group.o 00:03:20.646 CC lib/util/file.o 00:03:20.646 CC lib/util/hexlify.o 00:03:20.646 CC lib/util/iov.o 00:03:20.646 CC lib/util/math.o 00:03:20.646 CC lib/util/net.o 00:03:20.646 CC lib/util/pipe.o 00:03:20.646 CC lib/util/strerror_tls.o 00:03:20.646 CC lib/util/string.o 00:03:20.646 CC lib/util/uuid.o 00:03:20.646 CC lib/util/xor.o 00:03:20.646 CC lib/util/zipf.o 00:03:20.646 CC lib/util/md5.o 00:03:20.646 CC lib/vfio_user/host/vfio_user_pci.o 00:03:20.646 CC lib/vfio_user/host/vfio_user.o 00:03:20.646 LIB libspdk_dma.a 00:03:20.646 SO libspdk_dma.so.5.0 00:03:20.646 LIB libspdk_ioat.a 00:03:20.646 SO libspdk_ioat.so.7.0 00:03:20.646 SYMLINK libspdk_dma.so 00:03:20.646 SYMLINK libspdk_ioat.so 00:03:20.646 LIB libspdk_vfio_user.a 00:03:20.646 SO libspdk_vfio_user.so.5.0 00:03:20.646 SYMLINK libspdk_vfio_user.so 00:03:20.646 LIB libspdk_util.a 00:03:20.646 SO libspdk_util.so.10.1 00:03:20.646 SYMLINK libspdk_util.so 00:03:20.646 LIB libspdk_trace_parser.a 00:03:20.646 SO libspdk_trace_parser.so.6.0 00:03:20.646 SYMLINK libspdk_trace_parser.so 00:03:20.646 CC lib/env_dpdk/env.o 00:03:20.646 CC lib/env_dpdk/memory.o 00:03:20.646 CC lib/conf/conf.o 00:03:20.646 CC lib/env_dpdk/pci.o 00:03:20.646 CC lib/json/json_parse.o 00:03:20.646 CC lib/json/json_util.o 00:03:20.646 CC lib/env_dpdk/init.o 00:03:20.646 CC lib/json/json_write.o 00:03:20.646 CC lib/env_dpdk/threads.o 00:03:20.646 CC lib/env_dpdk/pci_ioat.o 00:03:20.646 CC lib/vmd/vmd.o 00:03:20.646 CC lib/env_dpdk/pci_virtio.o 00:03:20.646 CC lib/vmd/led.o 00:03:20.646 CC lib/env_dpdk/pci_vmd.o 00:03:20.646 CC lib/rdma_utils/rdma_utils.o 00:03:20.646 CC lib/env_dpdk/pci_idxd.o 00:03:20.646 CC lib/env_dpdk/pci_event.o 00:03:20.646 CC lib/idxd/idxd.o 00:03:20.646 CC lib/env_dpdk/sigbus_handler.o 00:03:20.646 CC lib/idxd/idxd_user.o 00:03:20.646 CC lib/idxd/idxd_kernel.o 00:03:20.646 CC lib/env_dpdk/pci_dpdk.o 00:03:20.646 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:20.646 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:20.646 LIB libspdk_conf.a 00:03:20.646 SO libspdk_conf.so.6.0 00:03:20.646 LIB libspdk_rdma_utils.a 00:03:20.646 SO libspdk_rdma_utils.so.1.0 00:03:20.646 LIB libspdk_json.a 00:03:20.646 SYMLINK libspdk_conf.so 00:03:20.646 SO libspdk_json.so.6.0 00:03:20.646 SYMLINK libspdk_rdma_utils.so 00:03:20.646 SYMLINK libspdk_json.so 00:03:20.905 LIB libspdk_idxd.a 00:03:20.905 LIB libspdk_vmd.a 00:03:20.905 SO libspdk_idxd.so.12.1 00:03:20.905 SO libspdk_vmd.so.6.0 00:03:20.905 SYMLINK libspdk_idxd.so 00:03:20.905 SYMLINK libspdk_vmd.so 00:03:20.905 CC lib/rdma_provider/common.o 00:03:20.905 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:20.905 CC 
lib/jsonrpc/jsonrpc_server.o 00:03:20.905 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:20.905 CC lib/jsonrpc/jsonrpc_client.o 00:03:20.905 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:21.163 LIB libspdk_rdma_provider.a 00:03:21.163 SO libspdk_rdma_provider.so.7.0 00:03:21.163 LIB libspdk_jsonrpc.a 00:03:21.163 SO libspdk_jsonrpc.so.6.0 00:03:21.163 SYMLINK libspdk_rdma_provider.so 00:03:21.422 SYMLINK libspdk_jsonrpc.so 00:03:21.422 LIB libspdk_env_dpdk.a 00:03:21.422 SO libspdk_env_dpdk.so.15.1 00:03:21.422 SYMLINK libspdk_env_dpdk.so 00:03:21.681 CC lib/rpc/rpc.o 00:03:21.681 LIB libspdk_rpc.a 00:03:21.941 SO libspdk_rpc.so.6.0 00:03:21.941 SYMLINK libspdk_rpc.so 00:03:22.200 CC lib/notify/notify.o 00:03:22.200 CC lib/notify/notify_rpc.o 00:03:22.200 CC lib/trace/trace.o 00:03:22.200 CC lib/trace/trace_flags.o 00:03:22.200 CC lib/keyring/keyring.o 00:03:22.200 CC lib/trace/trace_rpc.o 00:03:22.200 CC lib/keyring/keyring_rpc.o 00:03:22.460 LIB libspdk_notify.a 00:03:22.460 SO libspdk_notify.so.6.0 00:03:22.460 LIB libspdk_keyring.a 00:03:22.460 LIB libspdk_trace.a 00:03:22.460 SO libspdk_keyring.so.2.0 00:03:22.460 SYMLINK libspdk_notify.so 00:03:22.460 SO libspdk_trace.so.11.0 00:03:22.460 SYMLINK libspdk_keyring.so 00:03:22.460 SYMLINK libspdk_trace.so 00:03:23.029 CC lib/thread/thread.o 00:03:23.029 CC lib/sock/sock.o 00:03:23.029 CC lib/thread/iobuf.o 00:03:23.029 CC lib/sock/sock_rpc.o 00:03:23.288 LIB libspdk_sock.a 00:03:23.288 SO libspdk_sock.so.10.0 00:03:23.288 SYMLINK libspdk_sock.so 00:03:23.547 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:23.547 CC lib/nvme/nvme_ctrlr.o 00:03:23.547 CC lib/nvme/nvme_fabric.o 00:03:23.547 CC lib/nvme/nvme_ns_cmd.o 00:03:23.547 CC lib/nvme/nvme_ns.o 00:03:23.547 CC lib/nvme/nvme_pcie_common.o 00:03:23.547 CC lib/nvme/nvme_pcie.o 00:03:23.547 CC lib/nvme/nvme_qpair.o 00:03:23.547 CC lib/nvme/nvme.o 00:03:23.547 CC lib/nvme/nvme_quirks.o 00:03:23.547 CC lib/nvme/nvme_transport.o 00:03:23.547 CC lib/nvme/nvme_discovery.o 00:03:23.547 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:23.547 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:23.547 CC lib/nvme/nvme_tcp.o 00:03:23.547 CC lib/nvme/nvme_opal.o 00:03:23.547 CC lib/nvme/nvme_io_msg.o 00:03:23.547 CC lib/nvme/nvme_poll_group.o 00:03:23.547 CC lib/nvme/nvme_zns.o 00:03:23.547 CC lib/nvme/nvme_stubs.o 00:03:23.547 CC lib/nvme/nvme_auth.o 00:03:23.547 CC lib/nvme/nvme_cuse.o 00:03:23.547 CC lib/nvme/nvme_vfio_user.o 00:03:23.547 CC lib/nvme/nvme_rdma.o 00:03:24.115 LIB libspdk_thread.a 00:03:24.115 SO libspdk_thread.so.11.0 00:03:24.115 SYMLINK libspdk_thread.so 00:03:24.372 CC lib/accel/accel_sw.o 00:03:24.372 CC lib/accel/accel.o 00:03:24.372 CC lib/accel/accel_rpc.o 00:03:24.372 CC lib/init/json_config.o 00:03:24.372 CC lib/init/subsystem.o 00:03:24.372 CC lib/init/subsystem_rpc.o 00:03:24.372 CC lib/blob/blobstore.o 00:03:24.372 CC lib/init/rpc.o 00:03:24.372 CC lib/fsdev/fsdev.o 00:03:24.372 CC lib/blob/request.o 00:03:24.372 CC lib/fsdev/fsdev_io.o 00:03:24.372 CC lib/blob/zeroes.o 00:03:24.372 CC lib/fsdev/fsdev_rpc.o 00:03:24.372 CC lib/blob/blob_bs_dev.o 00:03:24.372 CC lib/virtio/virtio.o 00:03:24.372 CC lib/virtio/virtio_vhost_user.o 00:03:24.372 CC lib/virtio/virtio_vfio_user.o 00:03:24.372 CC lib/virtio/virtio_pci.o 00:03:24.372 CC lib/vfu_tgt/tgt_endpoint.o 00:03:24.372 CC lib/vfu_tgt/tgt_rpc.o 00:03:24.631 LIB libspdk_init.a 00:03:24.631 SO libspdk_init.so.6.0 00:03:24.631 LIB libspdk_virtio.a 00:03:24.631 LIB libspdk_vfu_tgt.a 00:03:24.631 SYMLINK libspdk_init.so 00:03:24.631 SO libspdk_virtio.so.7.0 00:03:24.631 
SO libspdk_vfu_tgt.so.3.0 00:03:24.631 SYMLINK libspdk_vfu_tgt.so 00:03:24.631 SYMLINK libspdk_virtio.so 00:03:24.890 LIB libspdk_fsdev.a 00:03:24.890 SO libspdk_fsdev.so.2.0 00:03:24.890 CC lib/event/app.o 00:03:24.890 CC lib/event/reactor.o 00:03:24.890 CC lib/event/log_rpc.o 00:03:24.890 CC lib/event/app_rpc.o 00:03:24.890 CC lib/event/scheduler_static.o 00:03:24.890 SYMLINK libspdk_fsdev.so 00:03:25.149 LIB libspdk_accel.a 00:03:25.149 SO libspdk_accel.so.16.0 00:03:25.149 SYMLINK libspdk_accel.so 00:03:25.149 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:25.409 LIB libspdk_event.a 00:03:25.409 LIB libspdk_nvme.a 00:03:25.409 SO libspdk_event.so.14.0 00:03:25.409 SYMLINK libspdk_event.so 00:03:25.409 SO libspdk_nvme.so.15.0 00:03:25.668 CC lib/bdev/bdev.o 00:03:25.668 CC lib/bdev/bdev_rpc.o 00:03:25.668 CC lib/bdev/bdev_zone.o 00:03:25.668 CC lib/bdev/part.o 00:03:25.668 CC lib/bdev/scsi_nvme.o 00:03:25.668 SYMLINK libspdk_nvme.so 00:03:25.668 LIB libspdk_fuse_dispatcher.a 00:03:25.668 SO libspdk_fuse_dispatcher.so.1.0 00:03:25.927 SYMLINK libspdk_fuse_dispatcher.so 00:03:26.495 LIB libspdk_blob.a 00:03:26.495 SO libspdk_blob.so.12.0 00:03:26.495 SYMLINK libspdk_blob.so 00:03:27.063 CC lib/blobfs/blobfs.o 00:03:27.063 CC lib/lvol/lvol.o 00:03:27.063 CC lib/blobfs/tree.o 00:03:27.322 LIB libspdk_bdev.a 00:03:27.322 SO libspdk_bdev.so.17.0 00:03:27.581 LIB libspdk_blobfs.a 00:03:27.581 SYMLINK libspdk_bdev.so 00:03:27.581 SO libspdk_blobfs.so.11.0 00:03:27.581 LIB libspdk_lvol.a 00:03:27.581 SYMLINK libspdk_blobfs.so 00:03:27.581 SO libspdk_lvol.so.11.0 00:03:27.581 SYMLINK libspdk_lvol.so 00:03:27.841 CC lib/nbd/nbd.o 00:03:27.841 CC lib/nbd/nbd_rpc.o 00:03:27.841 CC lib/ublk/ublk.o 00:03:27.841 CC lib/ublk/ublk_rpc.o 00:03:27.841 CC lib/scsi/dev.o 00:03:27.841 CC lib/ftl/ftl_core.o 00:03:27.841 CC lib/nvmf/ctrlr.o 00:03:27.841 CC lib/scsi/lun.o 00:03:27.841 CC lib/scsi/port.o 00:03:27.841 CC lib/ftl/ftl_init.o 00:03:27.841 CC lib/nvmf/ctrlr_discovery.o 00:03:27.841 CC lib/nvmf/ctrlr_bdev.o 00:03:27.841 CC lib/ftl/ftl_layout.o 00:03:27.841 CC lib/scsi/scsi.o 00:03:27.841 CC lib/nvmf/subsystem.o 00:03:27.841 CC lib/scsi/scsi_bdev.o 00:03:27.841 CC lib/ftl/ftl_debug.o 00:03:27.841 CC lib/nvmf/nvmf.o 00:03:27.841 CC lib/ftl/ftl_io.o 00:03:27.841 CC lib/scsi/scsi_pr.o 00:03:27.841 CC lib/ftl/ftl_sb.o 00:03:27.841 CC lib/nvmf/nvmf_rpc.o 00:03:27.841 CC lib/scsi/scsi_rpc.o 00:03:27.841 CC lib/ftl/ftl_l2p.o 00:03:27.841 CC lib/scsi/task.o 00:03:27.841 CC lib/nvmf/transport.o 00:03:27.841 CC lib/ftl/ftl_l2p_flat.o 00:03:27.841 CC lib/nvmf/tcp.o 00:03:27.841 CC lib/ftl/ftl_nv_cache.o 00:03:27.841 CC lib/nvmf/stubs.o 00:03:27.841 CC lib/ftl/ftl_band.o 00:03:27.841 CC lib/nvmf/mdns_server.o 00:03:27.841 CC lib/ftl/ftl_band_ops.o 00:03:27.841 CC lib/nvmf/vfio_user.o 00:03:27.841 CC lib/ftl/ftl_writer.o 00:03:27.841 CC lib/nvmf/rdma.o 00:03:27.841 CC lib/nvmf/auth.o 00:03:27.841 CC lib/ftl/ftl_rq.o 00:03:27.841 CC lib/ftl/ftl_reloc.o 00:03:27.841 CC lib/ftl/ftl_l2p_cache.o 00:03:27.841 CC lib/ftl/ftl_p2l_log.o 00:03:27.841 CC lib/ftl/ftl_p2l.o 00:03:27.841 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:27.841 CC lib/ftl/mngt/ftl_mngt.o 00:03:27.841 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:27.841 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:27.841 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:27.841 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:27.841 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:27.841 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:27.841 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:27.841 CC lib/ftl/mngt/ftl_mngt_self_test.o 
00:03:27.841 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:27.841 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:27.841 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:27.841 CC lib/ftl/utils/ftl_conf.o 00:03:27.841 CC lib/ftl/utils/ftl_md.o 00:03:27.841 CC lib/ftl/utils/ftl_bitmap.o 00:03:27.841 CC lib/ftl/utils/ftl_mempool.o 00:03:27.841 CC lib/ftl/utils/ftl_property.o 00:03:27.841 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:27.841 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:27.841 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:27.841 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:27.841 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:27.841 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:27.841 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:27.841 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:27.841 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:27.841 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:27.841 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:27.841 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:27.841 CC lib/ftl/base/ftl_base_dev.o 00:03:27.841 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:27.841 CC lib/ftl/ftl_trace.o 00:03:27.841 CC lib/ftl/base/ftl_base_bdev.o 00:03:28.408 LIB libspdk_nbd.a 00:03:28.408 SO libspdk_nbd.so.7.0 00:03:28.667 LIB libspdk_scsi.a 00:03:28.667 LIB libspdk_ublk.a 00:03:28.667 SYMLINK libspdk_nbd.so 00:03:28.667 SO libspdk_scsi.so.9.0 00:03:28.667 SO libspdk_ublk.so.3.0 00:03:28.667 SYMLINK libspdk_scsi.so 00:03:28.667 SYMLINK libspdk_ublk.so 00:03:28.926 LIB libspdk_ftl.a 00:03:28.926 CC lib/iscsi/conn.o 00:03:28.926 CC lib/iscsi/init_grp.o 00:03:28.926 CC lib/iscsi/iscsi.o 00:03:28.926 CC lib/iscsi/param.o 00:03:28.926 CC lib/iscsi/portal_grp.o 00:03:28.926 CC lib/iscsi/tgt_node.o 00:03:28.926 CC lib/iscsi/iscsi_subsystem.o 00:03:28.926 CC lib/iscsi/iscsi_rpc.o 00:03:28.926 CC lib/vhost/vhost.o 00:03:28.926 CC lib/vhost/vhost_rpc.o 00:03:28.926 CC lib/iscsi/task.o 00:03:28.926 CC lib/vhost/vhost_scsi.o 00:03:28.926 CC lib/vhost/vhost_blk.o 00:03:28.926 CC lib/vhost/rte_vhost_user.o 00:03:28.926 SO libspdk_ftl.so.9.0 00:03:29.185 SYMLINK libspdk_ftl.so 00:03:29.753 LIB libspdk_nvmf.a 00:03:29.753 SO libspdk_nvmf.so.20.0 00:03:29.753 LIB libspdk_vhost.a 00:03:29.753 SO libspdk_vhost.so.8.0 00:03:29.753 SYMLINK libspdk_nvmf.so 00:03:30.041 SYMLINK libspdk_vhost.so 00:03:30.042 LIB libspdk_iscsi.a 00:03:30.042 SO libspdk_iscsi.so.8.0 00:03:30.042 SYMLINK libspdk_iscsi.so 00:03:30.608 CC module/env_dpdk/env_dpdk_rpc.o 00:03:30.608 CC module/vfu_device/vfu_virtio.o 00:03:30.608 CC module/vfu_device/vfu_virtio_blk.o 00:03:30.608 CC module/vfu_device/vfu_virtio_rpc.o 00:03:30.608 CC module/vfu_device/vfu_virtio_scsi.o 00:03:30.608 CC module/vfu_device/vfu_virtio_fs.o 00:03:30.866 CC module/accel/ioat/accel_ioat.o 00:03:30.866 CC module/accel/ioat/accel_ioat_rpc.o 00:03:30.866 CC module/accel/iaa/accel_iaa.o 00:03:30.866 CC module/keyring/file/keyring.o 00:03:30.866 CC module/accel/iaa/accel_iaa_rpc.o 00:03:30.866 CC module/keyring/file/keyring_rpc.o 00:03:30.866 CC module/blob/bdev/blob_bdev.o 00:03:30.866 LIB libspdk_env_dpdk_rpc.a 00:03:30.866 CC module/accel/error/accel_error.o 00:03:30.866 CC module/accel/error/accel_error_rpc.o 00:03:30.866 CC module/accel/dsa/accel_dsa.o 00:03:30.866 CC module/accel/dsa/accel_dsa_rpc.o 00:03:30.866 CC module/scheduler/gscheduler/gscheduler.o 00:03:30.866 CC module/sock/posix/posix.o 00:03:30.866 CC module/fsdev/aio/fsdev_aio.o 00:03:30.866 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:30.866 CC module/fsdev/aio/linux_aio_mgr.o 00:03:30.866 CC module/scheduler/dpdk_governor/dpdk_governor.o 
00:03:30.866 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:30.866 CC module/keyring/linux/keyring.o 00:03:30.866 CC module/keyring/linux/keyring_rpc.o 00:03:30.866 SO libspdk_env_dpdk_rpc.so.6.0 00:03:30.866 SYMLINK libspdk_env_dpdk_rpc.so 00:03:30.866 LIB libspdk_keyring_file.a 00:03:30.866 LIB libspdk_keyring_linux.a 00:03:30.866 LIB libspdk_accel_ioat.a 00:03:30.866 LIB libspdk_scheduler_gscheduler.a 00:03:31.125 SO libspdk_keyring_file.so.2.0 00:03:31.125 LIB libspdk_scheduler_dpdk_governor.a 00:03:31.125 SO libspdk_keyring_linux.so.1.0 00:03:31.125 LIB libspdk_accel_iaa.a 00:03:31.125 SO libspdk_scheduler_gscheduler.so.4.0 00:03:31.125 SO libspdk_accel_ioat.so.6.0 00:03:31.125 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:31.125 LIB libspdk_accel_error.a 00:03:31.125 LIB libspdk_scheduler_dynamic.a 00:03:31.125 SO libspdk_accel_iaa.so.3.0 00:03:31.125 SYMLINK libspdk_keyring_file.so 00:03:31.125 SO libspdk_accel_error.so.2.0 00:03:31.125 SO libspdk_scheduler_dynamic.so.4.0 00:03:31.125 SYMLINK libspdk_keyring_linux.so 00:03:31.125 LIB libspdk_blob_bdev.a 00:03:31.125 SYMLINK libspdk_scheduler_gscheduler.so 00:03:31.125 SYMLINK libspdk_accel_ioat.so 00:03:31.125 LIB libspdk_accel_dsa.a 00:03:31.125 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:31.125 SO libspdk_blob_bdev.so.12.0 00:03:31.125 SYMLINK libspdk_accel_iaa.so 00:03:31.125 SO libspdk_accel_dsa.so.5.0 00:03:31.125 SYMLINK libspdk_accel_error.so 00:03:31.125 SYMLINK libspdk_scheduler_dynamic.so 00:03:31.125 SYMLINK libspdk_blob_bdev.so 00:03:31.125 SYMLINK libspdk_accel_dsa.so 00:03:31.125 LIB libspdk_vfu_device.a 00:03:31.125 SO libspdk_vfu_device.so.3.0 00:03:31.384 SYMLINK libspdk_vfu_device.so 00:03:31.384 LIB libspdk_fsdev_aio.a 00:03:31.384 SO libspdk_fsdev_aio.so.1.0 00:03:31.384 LIB libspdk_sock_posix.a 00:03:31.384 SO libspdk_sock_posix.so.6.0 00:03:31.384 SYMLINK libspdk_fsdev_aio.so 00:03:31.643 SYMLINK libspdk_sock_posix.so 00:03:31.643 CC module/bdev/malloc/bdev_malloc.o 00:03:31.643 CC module/bdev/delay/vbdev_delay.o 00:03:31.643 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:31.643 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:31.643 CC module/bdev/error/vbdev_error.o 00:03:31.643 CC module/bdev/lvol/vbdev_lvol.o 00:03:31.643 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:31.643 CC module/bdev/error/vbdev_error_rpc.o 00:03:31.643 CC module/blobfs/bdev/blobfs_bdev.o 00:03:31.643 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:31.643 CC module/bdev/raid/bdev_raid.o 00:03:31.643 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:31.643 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:31.643 CC module/bdev/null/bdev_null.o 00:03:31.643 CC module/bdev/raid/bdev_raid_rpc.o 00:03:31.643 CC module/bdev/nvme/bdev_nvme.o 00:03:31.643 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:31.643 CC module/bdev/raid/bdev_raid_sb.o 00:03:31.643 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:31.643 CC module/bdev/null/bdev_null_rpc.o 00:03:31.643 CC module/bdev/aio/bdev_aio.o 00:03:31.643 CC module/bdev/raid/raid0.o 00:03:31.643 CC module/bdev/gpt/gpt.o 00:03:31.643 CC module/bdev/gpt/vbdev_gpt.o 00:03:31.643 CC module/bdev/nvme/bdev_mdns_client.o 00:03:31.643 CC module/bdev/raid/concat.o 00:03:31.643 CC module/bdev/raid/raid1.o 00:03:31.643 CC module/bdev/nvme/nvme_rpc.o 00:03:31.643 CC module/bdev/split/vbdev_split.o 00:03:31.643 CC module/bdev/aio/bdev_aio_rpc.o 00:03:31.643 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:31.643 CC module/bdev/nvme/vbdev_opal.o 00:03:31.643 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:31.643 CC 
module/bdev/split/vbdev_split_rpc.o 00:03:31.643 CC module/bdev/passthru/vbdev_passthru.o 00:03:31.643 CC module/bdev/ftl/bdev_ftl.o 00:03:31.643 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:31.643 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:31.643 CC module/bdev/iscsi/bdev_iscsi.o 00:03:31.643 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:31.643 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:31.643 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:31.902 LIB libspdk_blobfs_bdev.a 00:03:31.902 SO libspdk_blobfs_bdev.so.6.0 00:03:31.902 LIB libspdk_bdev_null.a 00:03:31.902 LIB libspdk_bdev_error.a 00:03:31.902 LIB libspdk_bdev_split.a 00:03:31.902 SO libspdk_bdev_null.so.6.0 00:03:31.902 LIB libspdk_bdev_ftl.a 00:03:31.902 SYMLINK libspdk_blobfs_bdev.so 00:03:31.902 LIB libspdk_bdev_gpt.a 00:03:31.902 SO libspdk_bdev_split.so.6.0 00:03:31.902 SO libspdk_bdev_error.so.6.0 00:03:31.902 LIB libspdk_bdev_passthru.a 00:03:31.902 LIB libspdk_bdev_aio.a 00:03:31.902 SO libspdk_bdev_ftl.so.6.0 00:03:31.902 SO libspdk_bdev_gpt.so.6.0 00:03:32.161 LIB libspdk_bdev_zone_block.a 00:03:32.161 SYMLINK libspdk_bdev_null.so 00:03:32.161 SO libspdk_bdev_passthru.so.6.0 00:03:32.161 SYMLINK libspdk_bdev_error.so 00:03:32.161 SO libspdk_bdev_aio.so.6.0 00:03:32.161 LIB libspdk_bdev_iscsi.a 00:03:32.161 SYMLINK libspdk_bdev_split.so 00:03:32.161 LIB libspdk_bdev_delay.a 00:03:32.161 LIB libspdk_bdev_malloc.a 00:03:32.161 SO libspdk_bdev_zone_block.so.6.0 00:03:32.161 SO libspdk_bdev_iscsi.so.6.0 00:03:32.161 SYMLINK libspdk_bdev_gpt.so 00:03:32.161 SYMLINK libspdk_bdev_ftl.so 00:03:32.161 SYMLINK libspdk_bdev_passthru.so 00:03:32.161 SO libspdk_bdev_delay.so.6.0 00:03:32.161 SO libspdk_bdev_malloc.so.6.0 00:03:32.161 SYMLINK libspdk_bdev_aio.so 00:03:32.161 SYMLINK libspdk_bdev_zone_block.so 00:03:32.161 SYMLINK libspdk_bdev_iscsi.so 00:03:32.161 LIB libspdk_bdev_virtio.a 00:03:32.161 SYMLINK libspdk_bdev_delay.so 00:03:32.161 SYMLINK libspdk_bdev_malloc.so 00:03:32.161 LIB libspdk_bdev_lvol.a 00:03:32.161 SO libspdk_bdev_virtio.so.6.0 00:03:32.161 SO libspdk_bdev_lvol.so.6.0 00:03:32.161 SYMLINK libspdk_bdev_virtio.so 00:03:32.161 SYMLINK libspdk_bdev_lvol.so 00:03:32.419 LIB libspdk_bdev_raid.a 00:03:32.419 SO libspdk_bdev_raid.so.6.0 00:03:32.678 SYMLINK libspdk_bdev_raid.so 00:03:33.615 LIB libspdk_bdev_nvme.a 00:03:33.615 SO libspdk_bdev_nvme.so.7.1 00:03:33.615 SYMLINK libspdk_bdev_nvme.so 00:03:34.185 CC module/event/subsystems/iobuf/iobuf.o 00:03:34.185 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:34.185 CC module/event/subsystems/keyring/keyring.o 00:03:34.185 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:34.443 CC module/event/subsystems/sock/sock.o 00:03:34.444 CC module/event/subsystems/scheduler/scheduler.o 00:03:34.444 CC module/event/subsystems/vmd/vmd.o 00:03:34.444 CC module/event/subsystems/fsdev/fsdev.o 00:03:34.444 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:34.444 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:34.444 LIB libspdk_event_vhost_blk.a 00:03:34.444 LIB libspdk_event_keyring.a 00:03:34.444 LIB libspdk_event_sock.a 00:03:34.444 LIB libspdk_event_iobuf.a 00:03:34.444 LIB libspdk_event_vmd.a 00:03:34.444 LIB libspdk_event_fsdev.a 00:03:34.444 LIB libspdk_event_scheduler.a 00:03:34.444 LIB libspdk_event_vfu_tgt.a 00:03:34.444 SO libspdk_event_vhost_blk.so.3.0 00:03:34.444 SO libspdk_event_iobuf.so.3.0 00:03:34.444 SO libspdk_event_keyring.so.1.0 00:03:34.444 SO libspdk_event_sock.so.5.0 00:03:34.444 SO libspdk_event_scheduler.so.4.0 00:03:34.444 SO 
libspdk_event_vmd.so.6.0 00:03:34.444 SO libspdk_event_fsdev.so.1.0 00:03:34.444 SO libspdk_event_vfu_tgt.so.3.0 00:03:34.444 SYMLINK libspdk_event_vhost_blk.so 00:03:34.444 SYMLINK libspdk_event_keyring.so 00:03:34.444 SYMLINK libspdk_event_sock.so 00:03:34.444 SYMLINK libspdk_event_iobuf.so 00:03:34.444 SYMLINK libspdk_event_fsdev.so 00:03:34.444 SYMLINK libspdk_event_scheduler.so 00:03:34.444 SYMLINK libspdk_event_vmd.so 00:03:34.703 SYMLINK libspdk_event_vfu_tgt.so 00:03:34.961 CC module/event/subsystems/accel/accel.o 00:03:34.961 LIB libspdk_event_accel.a 00:03:34.961 SO libspdk_event_accel.so.6.0 00:03:35.221 SYMLINK libspdk_event_accel.so 00:03:35.480 CC module/event/subsystems/bdev/bdev.o 00:03:35.480 LIB libspdk_event_bdev.a 00:03:35.480 SO libspdk_event_bdev.so.6.0 00:03:35.740 SYMLINK libspdk_event_bdev.so 00:03:36.000 CC module/event/subsystems/scsi/scsi.o 00:03:36.000 CC module/event/subsystems/ublk/ublk.o 00:03:36.000 CC module/event/subsystems/nbd/nbd.o 00:03:36.000 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:36.000 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:36.000 LIB libspdk_event_ublk.a 00:03:36.000 LIB libspdk_event_scsi.a 00:03:36.000 LIB libspdk_event_nbd.a 00:03:36.259 SO libspdk_event_ublk.so.3.0 00:03:36.259 SO libspdk_event_scsi.so.6.0 00:03:36.260 SO libspdk_event_nbd.so.6.0 00:03:36.260 LIB libspdk_event_nvmf.a 00:03:36.260 SYMLINK libspdk_event_ublk.so 00:03:36.260 SYMLINK libspdk_event_scsi.so 00:03:36.260 SYMLINK libspdk_event_nbd.so 00:03:36.260 SO libspdk_event_nvmf.so.6.0 00:03:36.260 SYMLINK libspdk_event_nvmf.so 00:03:36.523 CC module/event/subsystems/iscsi/iscsi.o 00:03:36.523 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:36.782 LIB libspdk_event_vhost_scsi.a 00:03:36.782 LIB libspdk_event_iscsi.a 00:03:36.782 SO libspdk_event_vhost_scsi.so.3.0 00:03:36.782 SO libspdk_event_iscsi.so.6.0 00:03:36.782 SYMLINK libspdk_event_vhost_scsi.so 00:03:36.782 SYMLINK libspdk_event_iscsi.so 00:03:37.041 SO libspdk.so.6.0 00:03:37.041 SYMLINK libspdk.so 00:03:37.301 CC app/trace_record/trace_record.o 00:03:37.301 CXX app/trace/trace.o 00:03:37.301 CC app/spdk_nvme_discover/discovery_aer.o 00:03:37.301 CC test/rpc_client/rpc_client_test.o 00:03:37.301 CC app/spdk_nvme_identify/identify.o 00:03:37.301 CC app/spdk_lspci/spdk_lspci.o 00:03:37.301 CC app/spdk_top/spdk_top.o 00:03:37.301 TEST_HEADER include/spdk/accel.h 00:03:37.301 TEST_HEADER include/spdk/accel_module.h 00:03:37.301 TEST_HEADER include/spdk/assert.h 00:03:37.301 TEST_HEADER include/spdk/barrier.h 00:03:37.301 TEST_HEADER include/spdk/bdev_module.h 00:03:37.301 TEST_HEADER include/spdk/base64.h 00:03:37.301 CC app/spdk_nvme_perf/perf.o 00:03:37.301 TEST_HEADER include/spdk/bdev.h 00:03:37.301 TEST_HEADER include/spdk/bit_array.h 00:03:37.301 TEST_HEADER include/spdk/bdev_zone.h 00:03:37.301 TEST_HEADER include/spdk/bit_pool.h 00:03:37.301 TEST_HEADER include/spdk/blob_bdev.h 00:03:37.301 TEST_HEADER include/spdk/blob.h 00:03:37.301 TEST_HEADER include/spdk/blobfs.h 00:03:37.301 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:37.301 TEST_HEADER include/spdk/conf.h 00:03:37.301 TEST_HEADER include/spdk/config.h 00:03:37.301 TEST_HEADER include/spdk/crc16.h 00:03:37.301 TEST_HEADER include/spdk/cpuset.h 00:03:37.301 TEST_HEADER include/spdk/crc32.h 00:03:37.301 TEST_HEADER include/spdk/crc64.h 00:03:37.301 TEST_HEADER include/spdk/dif.h 00:03:37.301 TEST_HEADER include/spdk/env_dpdk.h 00:03:37.301 TEST_HEADER include/spdk/dma.h 00:03:37.301 TEST_HEADER include/spdk/endian.h 00:03:37.301 
TEST_HEADER include/spdk/event.h 00:03:37.301 TEST_HEADER include/spdk/fd_group.h 00:03:37.301 TEST_HEADER include/spdk/env.h 00:03:37.301 TEST_HEADER include/spdk/file.h 00:03:37.301 TEST_HEADER include/spdk/fd.h 00:03:37.301 TEST_HEADER include/spdk/fsdev.h 00:03:37.301 TEST_HEADER include/spdk/fsdev_module.h 00:03:37.301 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:37.301 TEST_HEADER include/spdk/ftl.h 00:03:37.301 CC app/spdk_dd/spdk_dd.o 00:03:37.301 TEST_HEADER include/spdk/hexlify.h 00:03:37.301 TEST_HEADER include/spdk/idxd.h 00:03:37.301 TEST_HEADER include/spdk/gpt_spec.h 00:03:37.301 TEST_HEADER include/spdk/histogram_data.h 00:03:37.301 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:37.301 TEST_HEADER include/spdk/init.h 00:03:37.301 TEST_HEADER include/spdk/ioat.h 00:03:37.301 TEST_HEADER include/spdk/idxd_spec.h 00:03:37.301 TEST_HEADER include/spdk/ioat_spec.h 00:03:37.301 TEST_HEADER include/spdk/json.h 00:03:37.301 TEST_HEADER include/spdk/jsonrpc.h 00:03:37.301 TEST_HEADER include/spdk/iscsi_spec.h 00:03:37.301 TEST_HEADER include/spdk/keyring_module.h 00:03:37.301 TEST_HEADER include/spdk/likely.h 00:03:37.301 TEST_HEADER include/spdk/keyring.h 00:03:37.301 TEST_HEADER include/spdk/log.h 00:03:37.301 TEST_HEADER include/spdk/lvol.h 00:03:37.301 TEST_HEADER include/spdk/md5.h 00:03:37.301 TEST_HEADER include/spdk/memory.h 00:03:37.301 TEST_HEADER include/spdk/mmio.h 00:03:37.301 TEST_HEADER include/spdk/nbd.h 00:03:37.301 TEST_HEADER include/spdk/net.h 00:03:37.301 CC app/iscsi_tgt/iscsi_tgt.o 00:03:37.301 TEST_HEADER include/spdk/nvme.h 00:03:37.301 TEST_HEADER include/spdk/notify.h 00:03:37.301 TEST_HEADER include/spdk/nvme_intel.h 00:03:37.301 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:37.301 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:37.301 TEST_HEADER include/spdk/nvme_spec.h 00:03:37.301 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:37.301 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:37.301 TEST_HEADER include/spdk/nvme_zns.h 00:03:37.301 TEST_HEADER include/spdk/nvmf.h 00:03:37.301 TEST_HEADER include/spdk/nvmf_spec.h 00:03:37.301 TEST_HEADER include/spdk/opal.h 00:03:37.301 TEST_HEADER include/spdk/opal_spec.h 00:03:37.301 TEST_HEADER include/spdk/nvmf_transport.h 00:03:37.301 TEST_HEADER include/spdk/pci_ids.h 00:03:37.301 TEST_HEADER include/spdk/pipe.h 00:03:37.301 TEST_HEADER include/spdk/queue.h 00:03:37.301 CC app/nvmf_tgt/nvmf_main.o 00:03:37.301 TEST_HEADER include/spdk/reduce.h 00:03:37.301 TEST_HEADER include/spdk/scsi.h 00:03:37.301 TEST_HEADER include/spdk/scheduler.h 00:03:37.301 TEST_HEADER include/spdk/rpc.h 00:03:37.301 TEST_HEADER include/spdk/sock.h 00:03:37.301 TEST_HEADER include/spdk/stdinc.h 00:03:37.301 TEST_HEADER include/spdk/scsi_spec.h 00:03:37.301 TEST_HEADER include/spdk/trace_parser.h 00:03:37.301 TEST_HEADER include/spdk/string.h 00:03:37.301 TEST_HEADER include/spdk/trace.h 00:03:37.301 TEST_HEADER include/spdk/thread.h 00:03:37.301 TEST_HEADER include/spdk/tree.h 00:03:37.301 TEST_HEADER include/spdk/ublk.h 00:03:37.301 TEST_HEADER include/spdk/uuid.h 00:03:37.301 TEST_HEADER include/spdk/util.h 00:03:37.301 TEST_HEADER include/spdk/version.h 00:03:37.301 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:37.301 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:37.301 TEST_HEADER include/spdk/vmd.h 00:03:37.301 TEST_HEADER include/spdk/xor.h 00:03:37.301 TEST_HEADER include/spdk/zipf.h 00:03:37.301 CXX test/cpp_headers/accel.o 00:03:37.301 CXX test/cpp_headers/accel_module.o 00:03:37.302 TEST_HEADER include/spdk/vhost.h 
00:03:37.302 CXX test/cpp_headers/assert.o 00:03:37.302 CXX test/cpp_headers/barrier.o 00:03:37.302 CXX test/cpp_headers/base64.o 00:03:37.302 CXX test/cpp_headers/bdev.o 00:03:37.302 CXX test/cpp_headers/bdev_module.o 00:03:37.302 CC app/spdk_tgt/spdk_tgt.o 00:03:37.302 CXX test/cpp_headers/bit_array.o 00:03:37.302 CXX test/cpp_headers/bdev_zone.o 00:03:37.302 CXX test/cpp_headers/blobfs_bdev.o 00:03:37.302 CXX test/cpp_headers/bit_pool.o 00:03:37.302 CXX test/cpp_headers/blob_bdev.o 00:03:37.302 CXX test/cpp_headers/blob.o 00:03:37.302 CXX test/cpp_headers/blobfs.o 00:03:37.302 CXX test/cpp_headers/conf.o 00:03:37.302 CXX test/cpp_headers/config.o 00:03:37.302 CXX test/cpp_headers/crc32.o 00:03:37.302 CXX test/cpp_headers/cpuset.o 00:03:37.302 CXX test/cpp_headers/dif.o 00:03:37.302 CXX test/cpp_headers/crc64.o 00:03:37.302 CXX test/cpp_headers/crc16.o 00:03:37.302 CXX test/cpp_headers/endian.o 00:03:37.302 CXX test/cpp_headers/dma.o 00:03:37.302 CXX test/cpp_headers/env_dpdk.o 00:03:37.302 CXX test/cpp_headers/env.o 00:03:37.302 CXX test/cpp_headers/fd_group.o 00:03:37.302 CXX test/cpp_headers/event.o 00:03:37.302 CXX test/cpp_headers/file.o 00:03:37.302 CXX test/cpp_headers/fd.o 00:03:37.302 CXX test/cpp_headers/fsdev.o 00:03:37.302 CXX test/cpp_headers/fsdev_module.o 00:03:37.302 CXX test/cpp_headers/ftl.o 00:03:37.302 CXX test/cpp_headers/fuse_dispatcher.o 00:03:37.302 CXX test/cpp_headers/gpt_spec.o 00:03:37.302 CXX test/cpp_headers/histogram_data.o 00:03:37.302 CXX test/cpp_headers/hexlify.o 00:03:37.302 CXX test/cpp_headers/idxd.o 00:03:37.302 CXX test/cpp_headers/idxd_spec.o 00:03:37.302 CXX test/cpp_headers/ioat.o 00:03:37.302 CXX test/cpp_headers/init.o 00:03:37.302 CXX test/cpp_headers/iscsi_spec.o 00:03:37.302 CXX test/cpp_headers/json.o 00:03:37.302 CXX test/cpp_headers/ioat_spec.o 00:03:37.302 CXX test/cpp_headers/keyring.o 00:03:37.302 CXX test/cpp_headers/jsonrpc.o 00:03:37.575 CXX test/cpp_headers/likely.o 00:03:37.575 CXX test/cpp_headers/log.o 00:03:37.575 CXX test/cpp_headers/keyring_module.o 00:03:37.575 CXX test/cpp_headers/md5.o 00:03:37.575 CXX test/cpp_headers/lvol.o 00:03:37.575 CXX test/cpp_headers/memory.o 00:03:37.575 CXX test/cpp_headers/mmio.o 00:03:37.575 CXX test/cpp_headers/nbd.o 00:03:37.575 CXX test/cpp_headers/notify.o 00:03:37.575 CXX test/cpp_headers/net.o 00:03:37.575 CXX test/cpp_headers/nvme.o 00:03:37.575 CXX test/cpp_headers/nvme_intel.o 00:03:37.575 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:37.575 CXX test/cpp_headers/nvme_ocssd.o 00:03:37.575 CXX test/cpp_headers/nvme_spec.o 00:03:37.575 CXX test/cpp_headers/nvme_zns.o 00:03:37.575 CXX test/cpp_headers/nvmf_cmd.o 00:03:37.575 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:37.575 CXX test/cpp_headers/nvmf.o 00:03:37.575 CXX test/cpp_headers/nvmf_spec.o 00:03:37.575 CXX test/cpp_headers/nvmf_transport.o 00:03:37.575 CXX test/cpp_headers/opal.o 00:03:37.575 CC test/thread/poller_perf/poller_perf.o 00:03:37.575 CC examples/util/zipf/zipf.o 00:03:37.575 CC examples/ioat/perf/perf.o 00:03:37.575 CC test/app/jsoncat/jsoncat.o 00:03:37.575 CC test/env/vtophys/vtophys.o 00:03:37.575 CC test/app/histogram_perf/histogram_perf.o 00:03:37.575 CC test/app/stub/stub.o 00:03:37.575 CC test/env/memory/memory_ut.o 00:03:37.575 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:37.575 CC examples/ioat/verify/verify.o 00:03:37.575 CC test/app/bdev_svc/bdev_svc.o 00:03:37.575 CC test/env/pci/pci_ut.o 00:03:37.575 CC app/fio/nvme/fio_plugin.o 00:03:37.575 CC test/dma/test_dma/test_dma.o 00:03:37.575 CC 
app/fio/bdev/fio_plugin.o 00:03:37.575 LINK spdk_lspci 00:03:37.838 LINK rpc_client_test 00:03:37.838 LINK spdk_nvme_discover 00:03:37.838 LINK iscsi_tgt 00:03:38.099 LINK interrupt_tgt 00:03:38.099 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:38.099 CC test/env/mem_callbacks/mem_callbacks.o 00:03:38.099 LINK vtophys 00:03:38.099 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:38.099 LINK histogram_perf 00:03:38.099 CXX test/cpp_headers/opal_spec.o 00:03:38.099 CXX test/cpp_headers/pci_ids.o 00:03:38.099 CXX test/cpp_headers/pipe.o 00:03:38.099 CXX test/cpp_headers/queue.o 00:03:38.099 LINK nvmf_tgt 00:03:38.099 LINK spdk_trace_record 00:03:38.099 CXX test/cpp_headers/reduce.o 00:03:38.099 CXX test/cpp_headers/rpc.o 00:03:38.099 CXX test/cpp_headers/scheduler.o 00:03:38.099 CXX test/cpp_headers/scsi.o 00:03:38.099 CXX test/cpp_headers/scsi_spec.o 00:03:38.099 CXX test/cpp_headers/sock.o 00:03:38.099 LINK env_dpdk_post_init 00:03:38.099 CXX test/cpp_headers/stdinc.o 00:03:38.099 LINK spdk_tgt 00:03:38.099 CXX test/cpp_headers/string.o 00:03:38.099 CXX test/cpp_headers/thread.o 00:03:38.099 CXX test/cpp_headers/trace.o 00:03:38.099 CXX test/cpp_headers/trace_parser.o 00:03:38.099 CXX test/cpp_headers/tree.o 00:03:38.099 CXX test/cpp_headers/ublk.o 00:03:38.099 CXX test/cpp_headers/util.o 00:03:38.099 CXX test/cpp_headers/uuid.o 00:03:38.099 CXX test/cpp_headers/version.o 00:03:38.099 CXX test/cpp_headers/vfio_user_pci.o 00:03:38.099 CXX test/cpp_headers/vfio_user_spec.o 00:03:38.099 CXX test/cpp_headers/vhost.o 00:03:38.099 CXX test/cpp_headers/vmd.o 00:03:38.099 CXX test/cpp_headers/xor.o 00:03:38.099 CXX test/cpp_headers/zipf.o 00:03:38.099 LINK poller_perf 00:03:38.099 LINK jsoncat 00:03:38.099 LINK zipf 00:03:38.099 LINK spdk_dd 00:03:38.099 LINK verify 00:03:38.099 LINK stub 00:03:38.099 LINK bdev_svc 00:03:38.356 LINK ioat_perf 00:03:38.356 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:38.356 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:38.356 LINK spdk_trace 00:03:38.615 LINK pci_ut 00:03:38.615 LINK test_dma 00:03:38.615 LINK nvme_fuzz 00:03:38.615 LINK spdk_nvme 00:03:38.615 CC test/event/event_perf/event_perf.o 00:03:38.615 CC test/event/reactor/reactor.o 00:03:38.615 LINK spdk_top 00:03:38.615 CC test/event/reactor_perf/reactor_perf.o 00:03:38.615 LINK spdk_bdev 00:03:38.615 CC test/event/app_repeat/app_repeat.o 00:03:38.615 LINK vhost_fuzz 00:03:38.615 CC examples/vmd/lsvmd/lsvmd.o 00:03:38.615 CC examples/vmd/led/led.o 00:03:38.615 CC examples/sock/hello_world/hello_sock.o 00:03:38.615 CC test/event/scheduler/scheduler.o 00:03:38.615 CC examples/idxd/perf/perf.o 00:03:38.615 CC examples/thread/thread/thread_ex.o 00:03:38.615 LINK spdk_nvme_perf 00:03:38.615 LINK spdk_nvme_identify 00:03:39.002 CC app/vhost/vhost.o 00:03:39.002 LINK mem_callbacks 00:03:39.002 LINK reactor 00:03:39.002 LINK event_perf 00:03:39.002 LINK reactor_perf 00:03:39.002 LINK lsvmd 00:03:39.002 LINK app_repeat 00:03:39.002 LINK led 00:03:39.002 LINK hello_sock 00:03:39.002 LINK scheduler 00:03:39.002 LINK vhost 00:03:39.002 LINK thread 00:03:39.002 LINK idxd_perf 00:03:39.002 CC test/nvme/overhead/overhead.o 00:03:39.002 CC test/nvme/sgl/sgl.o 00:03:39.002 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:39.002 CC test/nvme/e2edp/nvme_dp.o 00:03:39.002 CC test/nvme/boot_partition/boot_partition.o 00:03:39.002 CC test/nvme/reserve/reserve.o 00:03:39.002 CC test/nvme/err_injection/err_injection.o 00:03:39.002 CC test/nvme/aer/aer.o 00:03:39.002 CC test/nvme/cuse/cuse.o 00:03:39.002 CC 
test/nvme/startup/startup.o 00:03:39.002 CC test/nvme/connect_stress/connect_stress.o 00:03:39.002 CC test/nvme/reset/reset.o 00:03:39.002 CC test/nvme/compliance/nvme_compliance.o 00:03:39.002 CC test/nvme/simple_copy/simple_copy.o 00:03:39.002 CC test/nvme/fused_ordering/fused_ordering.o 00:03:39.002 CC test/nvme/fdp/fdp.o 00:03:39.002 CC test/accel/dif/dif.o 00:03:39.002 CC test/blobfs/mkfs/mkfs.o 00:03:39.283 LINK memory_ut 00:03:39.283 CC test/lvol/esnap/esnap.o 00:03:39.283 LINK boot_partition 00:03:39.283 LINK startup 00:03:39.283 LINK connect_stress 00:03:39.283 LINK doorbell_aers 00:03:39.283 LINK err_injection 00:03:39.283 LINK reserve 00:03:39.283 LINK fused_ordering 00:03:39.283 LINK mkfs 00:03:39.283 LINK simple_copy 00:03:39.283 LINK reset 00:03:39.283 LINK overhead 00:03:39.283 LINK nvme_dp 00:03:39.283 LINK sgl 00:03:39.283 LINK aer 00:03:39.283 LINK nvme_compliance 00:03:39.283 LINK fdp 00:03:39.283 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:39.283 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:39.283 CC examples/nvme/abort/abort.o 00:03:39.283 CC examples/nvme/arbitration/arbitration.o 00:03:39.283 CC examples/nvme/hotplug/hotplug.o 00:03:39.283 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:39.283 CC examples/nvme/reconnect/reconnect.o 00:03:39.283 CC examples/nvme/hello_world/hello_world.o 00:03:39.553 CC examples/accel/perf/accel_perf.o 00:03:39.553 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:39.553 CC examples/blob/hello_world/hello_blob.o 00:03:39.553 CC examples/blob/cli/blobcli.o 00:03:39.553 LINK cmb_copy 00:03:39.553 LINK pmr_persistence 00:03:39.553 LINK iscsi_fuzz 00:03:39.553 LINK hello_world 00:03:39.553 LINK hotplug 00:03:39.553 LINK dif 00:03:39.830 LINK reconnect 00:03:39.830 LINK arbitration 00:03:39.830 LINK abort 00:03:39.830 LINK hello_blob 00:03:39.830 LINK hello_fsdev 00:03:39.830 LINK nvme_manage 00:03:39.830 LINK accel_perf 00:03:39.830 LINK blobcli 00:03:40.107 LINK cuse 00:03:40.107 CC test/bdev/bdevio/bdevio.o 00:03:40.366 CC examples/bdev/hello_world/hello_bdev.o 00:03:40.366 CC examples/bdev/bdevperf/bdevperf.o 00:03:40.625 LINK bdevio 00:03:40.625 LINK hello_bdev 00:03:40.883 LINK bdevperf 00:03:41.450 CC examples/nvmf/nvmf/nvmf.o 00:03:41.709 LINK nvmf 00:03:43.088 LINK esnap 00:03:43.088 00:03:43.088 real 0m55.766s 00:03:43.088 user 8m17.852s 00:03:43.088 sys 3m43.797s 00:03:43.088 19:05:06 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:43.088 19:05:06 make -- common/autotest_common.sh@10 -- $ set +x 00:03:43.088 ************************************ 00:03:43.088 END TEST make 00:03:43.088 ************************************ 00:03:43.088 19:05:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:43.088 19:05:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:43.088 19:05:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:43.088 19:05:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.088 19:05:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:43.088 19:05:06 -- pm/common@44 -- $ pid=3467117 00:03:43.088 19:05:06 -- pm/common@50 -- $ kill -TERM 3467117 00:03:43.088 19:05:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.088 19:05:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:43.088 19:05:06 -- pm/common@44 -- $ pid=3467119 00:03:43.088 19:05:06 -- pm/common@50 -- $ kill -TERM 
3467119 00:03:43.088 19:05:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.088 19:05:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:43.088 19:05:06 -- pm/common@44 -- $ pid=3467120 00:03:43.088 19:05:06 -- pm/common@50 -- $ kill -TERM 3467120 00:03:43.088 19:05:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.088 19:05:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:43.088 19:05:06 -- pm/common@44 -- $ pid=3467144 00:03:43.088 19:05:06 -- pm/common@50 -- $ sudo -E kill -TERM 3467144 00:03:43.088 19:05:06 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:43.088 19:05:06 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:43.088 19:05:06 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:43.088 19:05:06 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:43.088 19:05:06 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:43.348 19:05:06 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:43.348 19:05:06 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:43.348 19:05:06 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:43.348 19:05:06 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:43.348 19:05:06 -- scripts/common.sh@336 -- # IFS=.-: 00:03:43.348 19:05:06 -- scripts/common.sh@336 -- # read -ra ver1 00:03:43.348 19:05:06 -- scripts/common.sh@337 -- # IFS=.-: 00:03:43.348 19:05:06 -- scripts/common.sh@337 -- # read -ra ver2 00:03:43.348 19:05:06 -- scripts/common.sh@338 -- # local 'op=<' 00:03:43.348 19:05:06 -- scripts/common.sh@340 -- # ver1_l=2 00:03:43.348 19:05:06 -- scripts/common.sh@341 -- # ver2_l=1 00:03:43.348 19:05:06 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:43.348 19:05:06 -- scripts/common.sh@344 -- # case "$op" in 00:03:43.348 19:05:06 -- scripts/common.sh@345 -- # : 1 00:03:43.348 19:05:06 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:43.348 19:05:06 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:43.348 19:05:06 -- scripts/common.sh@365 -- # decimal 1 00:03:43.348 19:05:06 -- scripts/common.sh@353 -- # local d=1 00:03:43.348 19:05:06 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:43.348 19:05:06 -- scripts/common.sh@355 -- # echo 1 00:03:43.348 19:05:06 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:43.348 19:05:06 -- scripts/common.sh@366 -- # decimal 2 00:03:43.348 19:05:06 -- scripts/common.sh@353 -- # local d=2 00:03:43.348 19:05:06 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:43.348 19:05:06 -- scripts/common.sh@355 -- # echo 2 00:03:43.348 19:05:06 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:43.348 19:05:06 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:43.348 19:05:06 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:43.348 19:05:06 -- scripts/common.sh@368 -- # return 0 00:03:43.348 19:05:06 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:43.348 19:05:06 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:43.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.348 --rc genhtml_branch_coverage=1 00:03:43.348 --rc genhtml_function_coverage=1 00:03:43.348 --rc genhtml_legend=1 00:03:43.348 --rc geninfo_all_blocks=1 00:03:43.348 --rc geninfo_unexecuted_blocks=1 00:03:43.348 00:03:43.348 ' 00:03:43.348 19:05:06 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:43.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.348 --rc genhtml_branch_coverage=1 00:03:43.348 --rc genhtml_function_coverage=1 00:03:43.348 --rc genhtml_legend=1 00:03:43.348 --rc geninfo_all_blocks=1 00:03:43.348 --rc geninfo_unexecuted_blocks=1 00:03:43.348 00:03:43.348 ' 00:03:43.348 19:05:06 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:43.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.348 --rc genhtml_branch_coverage=1 00:03:43.348 --rc genhtml_function_coverage=1 00:03:43.348 --rc genhtml_legend=1 00:03:43.348 --rc geninfo_all_blocks=1 00:03:43.348 --rc geninfo_unexecuted_blocks=1 00:03:43.348 00:03:43.348 ' 00:03:43.348 19:05:06 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:43.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.348 --rc genhtml_branch_coverage=1 00:03:43.348 --rc genhtml_function_coverage=1 00:03:43.348 --rc genhtml_legend=1 00:03:43.348 --rc geninfo_all_blocks=1 00:03:43.348 --rc geninfo_unexecuted_blocks=1 00:03:43.348 00:03:43.348 ' 00:03:43.348 19:05:06 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:43.348 19:05:06 -- nvmf/common.sh@7 -- # uname -s 00:03:43.348 19:05:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:43.348 19:05:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:43.348 19:05:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:43.348 19:05:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:43.348 19:05:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:43.348 19:05:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:43.348 19:05:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:43.348 19:05:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:43.348 19:05:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:43.348 19:05:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:43.348 19:05:06 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:03:43.348 19:05:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:03:43.348 19:05:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:43.348 19:05:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:43.348 19:05:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:43.348 19:05:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:43.348 19:05:06 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:43.348 19:05:06 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:43.348 19:05:06 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:43.348 19:05:06 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:43.348 19:05:06 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:43.348 19:05:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.348 19:05:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.348 19:05:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.348 19:05:06 -- paths/export.sh@5 -- # export PATH 00:03:43.348 19:05:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.348 19:05:06 -- nvmf/common.sh@51 -- # : 0 00:03:43.348 19:05:06 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:43.348 19:05:06 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:43.348 19:05:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:43.348 19:05:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:43.348 19:05:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:43.348 19:05:06 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:43.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:43.348 19:05:06 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:43.348 19:05:06 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:43.349 19:05:06 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:43.349 19:05:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:43.349 19:05:06 -- spdk/autotest.sh@32 -- # uname -s 00:03:43.349 19:05:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:43.349 19:05:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:43.349 19:05:06 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
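The autotest.sh trace around this point saves the host's default core handler (the systemd-coredump pattern captured in old_core_pattern above) and, just below, echoes a pipe handler pointing at SPDK's core-collector.sh, presumably into /proc/sys/kernel/core_pattern (the redirection target is not visible in the xtrace), so any core dumped during the run is gathered under the output/coredumps directory it just created. A minimal sketch of that mechanism, assuming placeholder $rootdir/$output_dir variables and a restore-on-exit step the visible trace does not show:

    # Sketch only: route kernel core dumps to a collector script (requires root).
    old_core_pattern=$(</proc/sys/kernel/core_pattern)    # remember the system handler
    mkdir -p "$output_dir/coredumps"
    # A leading '|' makes the kernel pipe each core to the named program;
    # %P = dumping PID, %s = signal number, %t = time of dump.
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT  # put the old handler back

The format specifiers are interpreted by the kernel itself, not by the shell, so the same idea works with any collector program.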
00:03:43.349 19:05:06 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:43.349 19:05:06 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:43.349 19:05:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:43.349 19:05:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:43.349 19:05:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:43.349 19:05:06 -- spdk/autotest.sh@48 -- # udevadm_pid=3529609 00:03:43.349 19:05:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:43.349 19:05:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:43.349 19:05:06 -- pm/common@17 -- # local monitor 00:03:43.349 19:05:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.349 19:05:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.349 19:05:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.349 19:05:06 -- pm/common@21 -- # date +%s 00:03:43.349 19:05:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.349 19:05:06 -- pm/common@21 -- # date +%s 00:03:43.349 19:05:06 -- pm/common@25 -- # sleep 1 00:03:43.349 19:05:06 -- pm/common@21 -- # date +%s 00:03:43.349 19:05:06 -- pm/common@21 -- # date +%s 00:03:43.349 19:05:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732644306 00:03:43.349 19:05:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732644306 00:03:43.349 19:05:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732644306 00:03:43.349 19:05:06 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732644306 00:03:43.349 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732644306_collect-cpu-load.pm.log 00:03:43.349 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732644306_collect-vmstat.pm.log 00:03:43.349 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732644306_collect-cpu-temp.pm.log 00:03:43.349 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732644306_collect-bmc-pm.bmc.pm.log 00:03:44.288 19:05:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:44.288 19:05:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:44.288 19:05:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:44.288 19:05:07 -- common/autotest_common.sh@10 -- # set +x 00:03:44.288 19:05:07 -- spdk/autotest.sh@59 -- # create_test_list 00:03:44.288 19:05:07 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:44.288 19:05:07 -- common/autotest_common.sh@10 -- # set +x 00:03:44.288 19:05:07 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:44.288 19:05:07 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:44.288 19:05:07 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:44.288 19:05:07 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:44.288 19:05:07 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:44.288 19:05:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:44.288 19:05:07 -- common/autotest_common.sh@1457 -- # uname 00:03:44.288 19:05:07 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:44.288 19:05:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:44.288 19:05:07 -- common/autotest_common.sh@1477 -- # uname 00:03:44.288 19:05:07 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:44.288 19:05:07 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:44.288 19:05:07 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:44.548 lcov: LCOV version 1.15 00:03:44.548 19:05:07 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:56.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:56.762 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:08.978 19:05:32 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:08.978 19:05:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.237 19:05:32 -- common/autotest_common.sh@10 -- # set +x 00:04:09.237 19:05:32 -- spdk/autotest.sh@78 -- # rm -f 00:04:09.237 19:05:32 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.528 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:12.528 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:12.528 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:12.528 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:12.528 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:12.528 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:12.528 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:12.528 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:12.528 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:12.528 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:12.528 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:12.528 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:12.528 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:12.528 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:12.528 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:12.528 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:12.528 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:12.528 19:05:35 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:12.528 19:05:35 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:12.528 19:05:35 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:12.528 19:05:35 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:12.528 19:05:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:12.528 19:05:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:12.528 19:05:35 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:12.528 19:05:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:12.528 19:05:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.528 19:05:35 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:12.528 19:05:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.528 19:05:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.528 19:05:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:12.528 19:05:35 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:12.528 19:05:35 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:12.528 No valid GPT data, bailing 00:04:12.528 19:05:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:12.528 19:05:35 -- scripts/common.sh@394 -- # pt= 00:04:12.528 19:05:35 -- scripts/common.sh@395 -- # return 1 00:04:12.528 19:05:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:12.528 1+0 records in 00:04:12.528 1+0 records out 00:04:12.528 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00447952 s, 234 MB/s 00:04:12.528 19:05:35 -- spdk/autotest.sh@105 -- # sync 00:04:12.528 19:05:35 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:12.528 19:05:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:12.528 19:05:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:17.810 19:05:40 -- spdk/autotest.sh@111 -- # uname -s 00:04:17.810 19:05:40 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:17.810 19:05:40 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:17.810 19:05:40 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:21.105 Hugepages 00:04:21.105 node hugesize free / total 00:04:21.105 node0 1048576kB 0 / 0 00:04:21.105 node0 2048kB 0 / 0 00:04:21.105 node1 1048576kB 0 / 0 00:04:21.105 node1 2048kB 0 / 0 00:04:21.105 00:04:21.105 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:21.105 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:21.105 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:21.105 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:21.105 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:21.105 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:21.105 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:21.105 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:21.106 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:21.106 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:21.106 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:21.106 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:21.106 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:21.106 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:21.106 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:21.106 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:21.106 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:21.106 I/OAT 0000:80:04.7 8086 
2021 1 ioatdma - - 00:04:21.106 19:05:43 -- spdk/autotest.sh@117 -- # uname -s 00:04:21.106 19:05:43 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:21.106 19:05:43 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:21.106 19:05:43 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:23.670 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:23.670 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:23.670 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:23.670 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:23.670 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:23.670 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:23.670 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:23.670 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:23.670 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:23.670 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:23.670 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:23.670 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:23.670 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:23.670 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:23.929 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:23.929 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:25.308 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:25.308 19:05:48 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:26.247 19:05:49 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:26.247 19:05:49 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:26.247 19:05:49 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:26.247 19:05:49 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:26.247 19:05:49 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:26.247 19:05:49 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:26.247 19:05:49 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:26.247 19:05:49 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:26.247 19:05:49 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:26.507 19:05:49 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:26.507 19:05:49 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:26.507 19:05:49 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:29.044 Waiting for block devices as requested 00:04:29.303 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:29.303 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:29.303 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:29.562 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:29.562 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:29.562 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:29.821 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:29.821 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:29.821 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:29.821 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:30.080 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:30.080 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:30.080 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:30.340 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:30.340 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:30.340 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:30.599 0000:80:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:04:30.599 19:05:53 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:30.599 19:05:53 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:30.599 19:05:53 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:30.599 19:05:53 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:30.599 19:05:53 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:30.599 19:05:53 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:30.599 19:05:53 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:30.599 19:05:53 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:30.599 19:05:53 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:30.599 19:05:53 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:30.599 19:05:53 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:30.599 19:05:53 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:30.599 19:05:53 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:30.599 19:05:53 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:30.599 19:05:53 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:30.599 19:05:53 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:30.599 19:05:53 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:30.599 19:05:53 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:30.599 19:05:53 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:30.599 19:05:53 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:30.599 19:05:53 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:30.599 19:05:53 -- common/autotest_common.sh@1543 -- # continue 00:04:30.599 19:05:53 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:30.599 19:05:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:30.599 19:05:53 -- common/autotest_common.sh@10 -- # set +x 00:04:30.599 19:05:53 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:30.599 19:05:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.599 19:05:53 -- common/autotest_common.sh@10 -- # set +x 00:04:30.599 19:05:53 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:33.889 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:33.889 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:33.889 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:33.889 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:33.889 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:33.889 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:33.889 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:33.889 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:33.889 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:33.889 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:33.889 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:33.889 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:33.889 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:33.889 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:33.889 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:33.889 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:35.266 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:35.266 19:05:58 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:35.266 19:05:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:35.266 19:05:58 -- common/autotest_common.sh@10 -- # set +x 00:04:35.266 19:05:58 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:35.266 19:05:58 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:35.266 19:05:58 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:35.266 19:05:58 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:35.266 19:05:58 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:35.266 19:05:58 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:35.266 19:05:58 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:35.266 19:05:58 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:35.266 19:05:58 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:35.266 19:05:58 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:35.266 19:05:58 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:35.266 19:05:58 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:35.266 19:05:58 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:35.266 19:05:58 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:35.266 19:05:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:35.266 19:05:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:35.266 19:05:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:35.266 19:05:58 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:35.266 19:05:58 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:35.266 19:05:58 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:35.266 19:05:58 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:35.266 19:05:58 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:35.266 19:05:58 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:35.266 19:05:58 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3543844 00:04:35.266 19:05:58 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.266 19:05:58 -- common/autotest_common.sh@1585 -- # waitforlisten 3543844 00:04:35.266 19:05:58 -- common/autotest_common.sh@835 -- # '[' -z 3543844 ']' 00:04:35.266 19:05:58 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.266 19:05:58 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.266 19:05:58 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.266 19:05:58 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.266 19:05:58 -- common/autotest_common.sh@10 -- # set +x 00:04:35.266 [2024-11-26 19:05:58.318826] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
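The opal_revert_cleanup block above builds its BDF list by asking gen_nvme.sh for every NVMe traddr and keeping only the controllers whose PCI device ID, read back from sysfs, equals 0x0a54. A condensed sketch of that filter with illustrative variable names; the real helpers (get_nvme_bdfs, get_nvme_bdfs_by_id) live in common/autotest_common.sh:

    # Sketch: keep only NVMe controllers whose PCI device ID matches the wanted value.
    want=0x0a54                                    # device ID checked in the trace above
    mapfile -t all_bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    bdfs=()
    for bdf in "${all_bdfs[@]}"; do
        dev=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0a54
        [[ $dev == "$want" ]] && bdfs+=("$bdf")
    done
    printf '%s\n' "${bdfs[@]}"                     # -> 0000:5e:00.0 in this run

On this host only one controller matches, so the subsequent bdev_nvme_attach_controller and bdev_nvme_opal_revert RPCs in the trace below run against 0000:5e:00.0 alone.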
00:04:35.266 [2024-11-26 19:05:58.318873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3543844 ] 00:04:35.525 [2024-11-26 19:05:58.394504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.525 [2024-11-26 19:05:58.434423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.783 19:05:58 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.783 19:05:58 -- common/autotest_common.sh@868 -- # return 0 00:04:35.783 19:05:58 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:35.783 19:05:58 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:35.783 19:05:58 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:39.073 nvme0n1 00:04:39.073 19:06:01 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:39.073 [2024-11-26 19:06:01.838534] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:39.073 request: 00:04:39.073 { 00:04:39.073 "nvme_ctrlr_name": "nvme0", 00:04:39.073 "password": "test", 00:04:39.073 "method": "bdev_nvme_opal_revert", 00:04:39.073 "req_id": 1 00:04:39.073 } 00:04:39.073 Got JSON-RPC error response 00:04:39.073 response: 00:04:39.073 { 00:04:39.073 "code": -32602, 00:04:39.073 "message": "Invalid parameters" 00:04:39.073 } 00:04:39.073 19:06:01 -- common/autotest_common.sh@1591 -- # true 00:04:39.073 19:06:01 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:39.073 19:06:01 -- common/autotest_common.sh@1595 -- # killprocess 3543844 00:04:39.073 19:06:01 -- common/autotest_common.sh@954 -- # '[' -z 3543844 ']' 00:04:39.073 19:06:01 -- common/autotest_common.sh@958 -- # kill -0 3543844 00:04:39.073 19:06:01 -- common/autotest_common.sh@959 -- # uname 00:04:39.073 19:06:01 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.073 19:06:01 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3543844 00:04:39.073 19:06:01 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.073 19:06:01 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.073 19:06:01 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3543844' 00:04:39.073 killing process with pid 3543844 00:04:39.073 19:06:01 -- common/autotest_common.sh@973 -- # kill 3543844 00:04:39.073 19:06:01 -- common/autotest_common.sh@978 -- # wait 3543844 00:04:40.978 19:06:04 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:40.978 19:06:04 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:40.978 19:06:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:40.978 19:06:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:40.978 19:06:04 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:40.978 19:06:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:40.978 19:06:04 -- common/autotest_common.sh@10 -- # set +x 00:04:40.978 19:06:04 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:40.978 19:06:04 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:40.978 19:06:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.978 19:06:04 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:04:40.978 19:06:04 -- common/autotest_common.sh@10 -- # set +x 00:04:40.978 ************************************ 00:04:40.978 START TEST env 00:04:40.978 ************************************ 00:04:40.978 19:06:04 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:41.238 * Looking for test storage... 00:04:41.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:41.238 19:06:04 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.238 19:06:04 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.238 19:06:04 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.238 19:06:04 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.238 19:06:04 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.238 19:06:04 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.238 19:06:04 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.238 19:06:04 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.238 19:06:04 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.238 19:06:04 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.238 19:06:04 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.238 19:06:04 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.238 19:06:04 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.238 19:06:04 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.238 19:06:04 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.238 19:06:04 env -- scripts/common.sh@344 -- # case "$op" in 00:04:41.238 19:06:04 env -- scripts/common.sh@345 -- # : 1 00:04:41.238 19:06:04 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.238 19:06:04 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.238 19:06:04 env -- scripts/common.sh@365 -- # decimal 1 00:04:41.238 19:06:04 env -- scripts/common.sh@353 -- # local d=1 00:04:41.238 19:06:04 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.238 19:06:04 env -- scripts/common.sh@355 -- # echo 1 00:04:41.238 19:06:04 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.238 19:06:04 env -- scripts/common.sh@366 -- # decimal 2 00:04:41.238 19:06:04 env -- scripts/common.sh@353 -- # local d=2 00:04:41.238 19:06:04 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.238 19:06:04 env -- scripts/common.sh@355 -- # echo 2 00:04:41.238 19:06:04 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.238 19:06:04 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.238 19:06:04 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.238 19:06:04 env -- scripts/common.sh@368 -- # return 0 00:04:41.238 19:06:04 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.238 19:06:04 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.238 --rc genhtml_branch_coverage=1 00:04:41.238 --rc genhtml_function_coverage=1 00:04:41.238 --rc genhtml_legend=1 00:04:41.238 --rc geninfo_all_blocks=1 00:04:41.238 --rc geninfo_unexecuted_blocks=1 00:04:41.238 00:04:41.238 ' 00:04:41.238 19:06:04 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.238 --rc genhtml_branch_coverage=1 00:04:41.238 --rc genhtml_function_coverage=1 00:04:41.238 --rc genhtml_legend=1 00:04:41.238 --rc geninfo_all_blocks=1 00:04:41.238 --rc geninfo_unexecuted_blocks=1 00:04:41.238 00:04:41.238 ' 00:04:41.238 19:06:04 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.238 --rc genhtml_branch_coverage=1 00:04:41.238 --rc genhtml_function_coverage=1 00:04:41.238 --rc genhtml_legend=1 00:04:41.238 --rc geninfo_all_blocks=1 00:04:41.238 --rc geninfo_unexecuted_blocks=1 00:04:41.238 00:04:41.238 ' 00:04:41.238 19:06:04 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.238 --rc genhtml_branch_coverage=1 00:04:41.238 --rc genhtml_function_coverage=1 00:04:41.238 --rc genhtml_legend=1 00:04:41.238 --rc geninfo_all_blocks=1 00:04:41.238 --rc geninfo_unexecuted_blocks=1 00:04:41.238 00:04:41.238 ' 00:04:41.238 19:06:04 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:41.238 19:06:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.238 19:06:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.238 19:06:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.238 ************************************ 00:04:41.239 START TEST env_memory 00:04:41.239 ************************************ 00:04:41.239 19:06:04 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:41.239 00:04:41.239 00:04:41.239 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.239 http://cunit.sourceforge.net/ 00:04:41.239 00:04:41.239 00:04:41.239 Suite: memory 00:04:41.239 Test: alloc and free memory map ...[2024-11-26 19:06:04.307313] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:41.239 passed 00:04:41.239 Test: mem map translation ...[2024-11-26 19:06:04.326069] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:41.239 [2024-11-26 19:06:04.326081] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:41.239 [2024-11-26 19:06:04.326113] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:41.239 [2024-11-26 19:06:04.326135] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:41.499 passed 00:04:41.499 Test: mem map registration ...[2024-11-26 19:06:04.362961] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:41.499 [2024-11-26 19:06:04.362974] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:41.499 passed 00:04:41.499 Test: mem map adjacent registrations ...passed 00:04:41.499 00:04:41.499 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.499 suites 1 1 n/a 0 0 00:04:41.499 tests 4 4 4 0 0 00:04:41.499 asserts 152 152 152 0 n/a 00:04:41.499 00:04:41.499 Elapsed time = 0.134 seconds 00:04:41.499 00:04:41.499 real 0m0.147s 00:04:41.499 user 0m0.138s 00:04:41.499 sys 0m0.009s 00:04:41.499 19:06:04 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.499 19:06:04 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:41.499 ************************************ 00:04:41.499 END TEST env_memory 00:04:41.499 ************************************ 00:04:41.499 19:06:04 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:41.499 19:06:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.499 19:06:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.499 19:06:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.499 ************************************ 00:04:41.499 START TEST env_vtophys 00:04:41.499 ************************************ 00:04:41.499 19:06:04 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:41.499 EAL: lib.eal log level changed from notice to debug 00:04:41.499 EAL: Detected lcore 0 as core 0 on socket 0 00:04:41.499 EAL: Detected lcore 1 as core 1 on socket 0 00:04:41.499 EAL: Detected lcore 2 as core 2 on socket 0 00:04:41.499 EAL: Detected lcore 3 as core 3 on socket 0 00:04:41.499 EAL: Detected lcore 4 as core 4 on socket 0 00:04:41.499 EAL: Detected lcore 5 as core 5 on socket 0 00:04:41.499 EAL: Detected lcore 6 as core 6 on socket 0 00:04:41.499 EAL: Detected lcore 7 as core 8 on socket 0 00:04:41.499 EAL: Detected lcore 8 as core 9 on socket 0 00:04:41.499 EAL: Detected lcore 9 as core 10 on socket 0 00:04:41.499 EAL: Detected lcore 10 as 
core 11 on socket 0 00:04:41.499 EAL: Detected lcore 11 as core 12 on socket 0 00:04:41.499 EAL: Detected lcore 12 as core 13 on socket 0 00:04:41.499 EAL: Detected lcore 13 as core 16 on socket 0 00:04:41.499 EAL: Detected lcore 14 as core 17 on socket 0 00:04:41.499 EAL: Detected lcore 15 as core 18 on socket 0 00:04:41.499 EAL: Detected lcore 16 as core 19 on socket 0 00:04:41.499 EAL: Detected lcore 17 as core 20 on socket 0 00:04:41.499 EAL: Detected lcore 18 as core 21 on socket 0 00:04:41.499 EAL: Detected lcore 19 as core 25 on socket 0 00:04:41.499 EAL: Detected lcore 20 as core 26 on socket 0 00:04:41.499 EAL: Detected lcore 21 as core 27 on socket 0 00:04:41.499 EAL: Detected lcore 22 as core 28 on socket 0 00:04:41.499 EAL: Detected lcore 23 as core 29 on socket 0 00:04:41.499 EAL: Detected lcore 24 as core 0 on socket 1 00:04:41.499 EAL: Detected lcore 25 as core 1 on socket 1 00:04:41.499 EAL: Detected lcore 26 as core 2 on socket 1 00:04:41.499 EAL: Detected lcore 27 as core 3 on socket 1 00:04:41.499 EAL: Detected lcore 28 as core 4 on socket 1 00:04:41.499 EAL: Detected lcore 29 as core 5 on socket 1 00:04:41.499 EAL: Detected lcore 30 as core 6 on socket 1 00:04:41.499 EAL: Detected lcore 31 as core 8 on socket 1 00:04:41.499 EAL: Detected lcore 32 as core 10 on socket 1 00:04:41.499 EAL: Detected lcore 33 as core 11 on socket 1 00:04:41.499 EAL: Detected lcore 34 as core 12 on socket 1 00:04:41.499 EAL: Detected lcore 35 as core 13 on socket 1 00:04:41.499 EAL: Detected lcore 36 as core 16 on socket 1 00:04:41.499 EAL: Detected lcore 37 as core 17 on socket 1 00:04:41.499 EAL: Detected lcore 38 as core 18 on socket 1 00:04:41.499 EAL: Detected lcore 39 as core 19 on socket 1 00:04:41.499 EAL: Detected lcore 40 as core 20 on socket 1 00:04:41.499 EAL: Detected lcore 41 as core 21 on socket 1 00:04:41.499 EAL: Detected lcore 42 as core 24 on socket 1 00:04:41.499 EAL: Detected lcore 43 as core 25 on socket 1 00:04:41.499 EAL: Detected lcore 44 as core 26 on socket 1 00:04:41.499 EAL: Detected lcore 45 as core 27 on socket 1 00:04:41.499 EAL: Detected lcore 46 as core 28 on socket 1 00:04:41.499 EAL: Detected lcore 47 as core 29 on socket 1 00:04:41.499 EAL: Detected lcore 48 as core 0 on socket 0 00:04:41.499 EAL: Detected lcore 49 as core 1 on socket 0 00:04:41.499 EAL: Detected lcore 50 as core 2 on socket 0 00:04:41.499 EAL: Detected lcore 51 as core 3 on socket 0 00:04:41.499 EAL: Detected lcore 52 as core 4 on socket 0 00:04:41.499 EAL: Detected lcore 53 as core 5 on socket 0 00:04:41.499 EAL: Detected lcore 54 as core 6 on socket 0 00:04:41.499 EAL: Detected lcore 55 as core 8 on socket 0 00:04:41.499 EAL: Detected lcore 56 as core 9 on socket 0 00:04:41.499 EAL: Detected lcore 57 as core 10 on socket 0 00:04:41.499 EAL: Detected lcore 58 as core 11 on socket 0 00:04:41.499 EAL: Detected lcore 59 as core 12 on socket 0 00:04:41.499 EAL: Detected lcore 60 as core 13 on socket 0 00:04:41.499 EAL: Detected lcore 61 as core 16 on socket 0 00:04:41.499 EAL: Detected lcore 62 as core 17 on socket 0 00:04:41.499 EAL: Detected lcore 63 as core 18 on socket 0 00:04:41.499 EAL: Detected lcore 64 as core 19 on socket 0 00:04:41.499 EAL: Detected lcore 65 as core 20 on socket 0 00:04:41.499 EAL: Detected lcore 66 as core 21 on socket 0 00:04:41.499 EAL: Detected lcore 67 as core 25 on socket 0 00:04:41.499 EAL: Detected lcore 68 as core 26 on socket 0 00:04:41.499 EAL: Detected lcore 69 as core 27 on socket 0 00:04:41.499 EAL: Detected lcore 70 as core 28 on socket 0 
00:04:41.499 EAL: Detected lcore 71 as core 29 on socket 0 00:04:41.499 EAL: Detected lcore 72 as core 0 on socket 1 00:04:41.499 EAL: Detected lcore 73 as core 1 on socket 1 00:04:41.499 EAL: Detected lcore 74 as core 2 on socket 1 00:04:41.499 EAL: Detected lcore 75 as core 3 on socket 1 00:04:41.499 EAL: Detected lcore 76 as core 4 on socket 1 00:04:41.499 EAL: Detected lcore 77 as core 5 on socket 1 00:04:41.499 EAL: Detected lcore 78 as core 6 on socket 1 00:04:41.499 EAL: Detected lcore 79 as core 8 on socket 1 00:04:41.499 EAL: Detected lcore 80 as core 10 on socket 1 00:04:41.499 EAL: Detected lcore 81 as core 11 on socket 1 00:04:41.499 EAL: Detected lcore 82 as core 12 on socket 1 00:04:41.499 EAL: Detected lcore 83 as core 13 on socket 1 00:04:41.500 EAL: Detected lcore 84 as core 16 on socket 1 00:04:41.500 EAL: Detected lcore 85 as core 17 on socket 1 00:04:41.500 EAL: Detected lcore 86 as core 18 on socket 1 00:04:41.500 EAL: Detected lcore 87 as core 19 on socket 1 00:04:41.500 EAL: Detected lcore 88 as core 20 on socket 1 00:04:41.500 EAL: Detected lcore 89 as core 21 on socket 1 00:04:41.500 EAL: Detected lcore 90 as core 24 on socket 1 00:04:41.500 EAL: Detected lcore 91 as core 25 on socket 1 00:04:41.500 EAL: Detected lcore 92 as core 26 on socket 1 00:04:41.500 EAL: Detected lcore 93 as core 27 on socket 1 00:04:41.500 EAL: Detected lcore 94 as core 28 on socket 1 00:04:41.500 EAL: Detected lcore 95 as core 29 on socket 1 00:04:41.500 EAL: Maximum logical cores by configuration: 128 00:04:41.500 EAL: Detected CPU lcores: 96 00:04:41.500 EAL: Detected NUMA nodes: 2 00:04:41.500 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:41.500 EAL: Detected shared linkage of DPDK 00:04:41.500 EAL: No shared files mode enabled, IPC will be disabled 00:04:41.500 EAL: Bus pci wants IOVA as 'DC' 00:04:41.500 EAL: Buses did not request a specific IOVA mode. 00:04:41.500 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:41.500 EAL: Selected IOVA mode 'VA' 00:04:41.500 EAL: Probing VFIO support... 00:04:41.500 EAL: IOMMU type 1 (Type 1) is supported 00:04:41.500 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:41.500 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:41.500 EAL: VFIO support initialized 00:04:41.500 EAL: Ask a virtual area of 0x2e000 bytes 00:04:41.500 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:41.500 EAL: Setting up physically contiguous memory... 
00:04:41.500 EAL: Setting maximum number of open files to 524288 00:04:41.500 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:41.500 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:41.500 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:41.500 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.500 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:41.500 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.500 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.500 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:41.500 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:41.500 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.500 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:41.500 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.500 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.500 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:41.500 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:41.500 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.500 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:41.500 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.500 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.500 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:41.500 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:41.500 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.500 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:41.500 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.500 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.500 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:41.500 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:41.500 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:41.500 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.500 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:41.500 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:41.500 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.500 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:41.500 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:41.500 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.500 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:41.500 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:41.500 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.500 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:41.500 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:41.500 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.500 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:41.500 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:41.500 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.500 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:41.500 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:41.500 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.500 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:41.500 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:41.500 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.500 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:41.500 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:41.500 EAL: Hugepages will be freed exactly as allocated. 00:04:41.500 EAL: No shared files mode enabled, IPC is disabled 00:04:41.500 EAL: No shared files mode enabled, IPC is disabled 00:04:41.500 EAL: TSC frequency is ~2100000 KHz 00:04:41.500 EAL: Main lcore 0 is ready (tid=7f33388e6a00;cpuset=[0]) 00:04:41.500 EAL: Trying to obtain current memory policy. 00:04:41.500 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.500 EAL: Restoring previous memory policy: 0 00:04:41.500 EAL: request: mp_malloc_sync 00:04:41.500 EAL: No shared files mode enabled, IPC is disabled 00:04:41.500 EAL: Heap on socket 0 was expanded by 2MB 00:04:41.500 EAL: No shared files mode enabled, IPC is disabled 00:04:41.500 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:41.500 EAL: Mem event callback 'spdk:(nil)' registered 00:04:41.500 00:04:41.500 00:04:41.500 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.500 http://cunit.sourceforge.net/ 00:04:41.500 00:04:41.500 00:04:41.500 Suite: components_suite 00:04:41.500 Test: vtophys_malloc_test ...passed 00:04:41.500 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:41.500 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.500 EAL: Restoring previous memory policy: 4 00:04:41.500 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.500 EAL: request: mp_malloc_sync 00:04:41.500 EAL: No shared files mode enabled, IPC is disabled 00:04:41.500 EAL: Heap on socket 0 was expanded by 4MB 00:04:41.500 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.500 EAL: request: mp_malloc_sync 00:04:41.500 EAL: No shared files mode enabled, IPC is disabled 00:04:41.500 EAL: Heap on socket 0 was shrunk by 4MB 00:04:41.500 EAL: Trying to obtain current memory policy. 00:04:41.500 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.500 EAL: Restoring previous memory policy: 4 00:04:41.500 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.500 EAL: request: mp_malloc_sync 00:04:41.500 EAL: No shared files mode enabled, IPC is disabled 00:04:41.500 EAL: Heap on socket 0 was expanded by 6MB 00:04:41.500 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.500 EAL: request: mp_malloc_sync 00:04:41.500 EAL: No shared files mode enabled, IPC is disabled 00:04:41.500 EAL: Heap on socket 0 was shrunk by 6MB 00:04:41.500 EAL: Trying to obtain current memory policy. 00:04:41.500 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.500 EAL: Restoring previous memory policy: 4 00:04:41.500 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.500 EAL: request: mp_malloc_sync 00:04:41.500 EAL: No shared files mode enabled, IPC is disabled 00:04:41.500 EAL: Heap on socket 0 was expanded by 10MB 00:04:41.500 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.500 EAL: request: mp_malloc_sync 00:04:41.500 EAL: No shared files mode enabled, IPC is disabled 00:04:41.500 EAL: Heap on socket 0 was shrunk by 10MB 00:04:41.500 EAL: Trying to obtain current memory policy. 
00:04:41.500 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.500 EAL: Restoring previous memory policy: 4 00:04:41.500 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.500 EAL: request: mp_malloc_sync 00:04:41.500 EAL: No shared files mode enabled, IPC is disabled 00:04:41.500 EAL: Heap on socket 0 was expanded by 18MB 00:04:41.500 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.500 EAL: request: mp_malloc_sync 00:04:41.500 EAL: No shared files mode enabled, IPC is disabled 00:04:41.500 EAL: Heap on socket 0 was shrunk by 18MB 00:04:41.500 EAL: Trying to obtain current memory policy. 00:04:41.500 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.500 EAL: Restoring previous memory policy: 4 00:04:41.500 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.500 EAL: request: mp_malloc_sync 00:04:41.500 EAL: No shared files mode enabled, IPC is disabled 00:04:41.500 EAL: Heap on socket 0 was expanded by 34MB 00:04:41.500 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.500 EAL: request: mp_malloc_sync 00:04:41.500 EAL: No shared files mode enabled, IPC is disabled 00:04:41.500 EAL: Heap on socket 0 was shrunk by 34MB 00:04:41.500 EAL: Trying to obtain current memory policy. 00:04:41.500 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.500 EAL: Restoring previous memory policy: 4 00:04:41.500 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.500 EAL: request: mp_malloc_sync 00:04:41.500 EAL: No shared files mode enabled, IPC is disabled 00:04:41.500 EAL: Heap on socket 0 was expanded by 66MB 00:04:41.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.759 EAL: request: mp_malloc_sync 00:04:41.759 EAL: No shared files mode enabled, IPC is disabled 00:04:41.759 EAL: Heap on socket 0 was shrunk by 66MB 00:04:41.759 EAL: Trying to obtain current memory policy. 00:04:41.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.759 EAL: Restoring previous memory policy: 4 00:04:41.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.759 EAL: request: mp_malloc_sync 00:04:41.759 EAL: No shared files mode enabled, IPC is disabled 00:04:41.759 EAL: Heap on socket 0 was expanded by 130MB 00:04:41.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.759 EAL: request: mp_malloc_sync 00:04:41.759 EAL: No shared files mode enabled, IPC is disabled 00:04:41.759 EAL: Heap on socket 0 was shrunk by 130MB 00:04:41.759 EAL: Trying to obtain current memory policy. 00:04:41.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.759 EAL: Restoring previous memory policy: 4 00:04:41.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.759 EAL: request: mp_malloc_sync 00:04:41.759 EAL: No shared files mode enabled, IPC is disabled 00:04:41.759 EAL: Heap on socket 0 was expanded by 258MB 00:04:41.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.759 EAL: request: mp_malloc_sync 00:04:41.759 EAL: No shared files mode enabled, IPC is disabled 00:04:41.759 EAL: Heap on socket 0 was shrunk by 258MB 00:04:41.759 EAL: Trying to obtain current memory policy. 
00:04:41.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.018 EAL: Restoring previous memory policy: 4 00:04:42.018 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.018 EAL: request: mp_malloc_sync 00:04:42.018 EAL: No shared files mode enabled, IPC is disabled 00:04:42.018 EAL: Heap on socket 0 was expanded by 514MB 00:04:42.018 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.018 EAL: request: mp_malloc_sync 00:04:42.018 EAL: No shared files mode enabled, IPC is disabled 00:04:42.018 EAL: Heap on socket 0 was shrunk by 514MB 00:04:42.018 EAL: Trying to obtain current memory policy. 00:04:42.018 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.277 EAL: Restoring previous memory policy: 4 00:04:42.277 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.277 EAL: request: mp_malloc_sync 00:04:42.277 EAL: No shared files mode enabled, IPC is disabled 00:04:42.277 EAL: Heap on socket 0 was expanded by 1026MB 00:04:42.536 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.536 EAL: request: mp_malloc_sync 00:04:42.536 EAL: No shared files mode enabled, IPC is disabled 00:04:42.536 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:42.536 passed 00:04:42.536 00:04:42.536 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.536 suites 1 1 n/a 0 0 00:04:42.536 tests 2 2 2 0 0 00:04:42.536 asserts 497 497 497 0 n/a 00:04:42.536 00:04:42.536 Elapsed time = 0.966 seconds 00:04:42.536 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.536 EAL: request: mp_malloc_sync 00:04:42.536 EAL: No shared files mode enabled, IPC is disabled 00:04:42.536 EAL: Heap on socket 0 was shrunk by 2MB 00:04:42.536 EAL: No shared files mode enabled, IPC is disabled 00:04:42.536 EAL: No shared files mode enabled, IPC is disabled 00:04:42.536 EAL: No shared files mode enabled, IPC is disabled 00:04:42.536 00:04:42.536 real 0m1.093s 00:04:42.536 user 0m0.644s 00:04:42.536 sys 0m0.426s 00:04:42.536 19:06:05 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.536 19:06:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:42.536 ************************************ 00:04:42.536 END TEST env_vtophys 00:04:42.536 ************************************ 00:04:42.536 19:06:05 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:42.536 19:06:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.536 19:06:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.536 19:06:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.795 ************************************ 00:04:42.796 START TEST env_pci 00:04:42.796 ************************************ 00:04:42.796 19:06:05 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:42.796 00:04:42.796 00:04:42.796 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.796 http://cunit.sourceforge.net/ 00:04:42.796 00:04:42.796 00:04:42.796 Suite: pci 00:04:42.796 Test: pci_hook ...[2024-11-26 19:06:05.666580] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3545396 has claimed it 00:04:42.796 EAL: Cannot find device (10000:00:01.0) 00:04:42.796 EAL: Failed to attach device on primary process 00:04:42.796 passed 00:04:42.796 00:04:42.796 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:42.796 suites 1 1 n/a 0 0 00:04:42.796 tests 1 1 1 0 0 00:04:42.796 asserts 25 25 25 0 n/a 00:04:42.796 00:04:42.796 Elapsed time = 0.028 seconds 00:04:42.796 00:04:42.796 real 0m0.048s 00:04:42.796 user 0m0.012s 00:04:42.796 sys 0m0.036s 00:04:42.796 19:06:05 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.796 19:06:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:42.796 ************************************ 00:04:42.796 END TEST env_pci 00:04:42.796 ************************************ 00:04:42.796 19:06:05 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:42.796 19:06:05 env -- env/env.sh@15 -- # uname 00:04:42.796 19:06:05 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:42.796 19:06:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:42.796 19:06:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:42.796 19:06:05 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:42.796 19:06:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.796 19:06:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.796 ************************************ 00:04:42.796 START TEST env_dpdk_post_init 00:04:42.796 ************************************ 00:04:42.796 19:06:05 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:42.796 EAL: Detected CPU lcores: 96 00:04:42.796 EAL: Detected NUMA nodes: 2 00:04:42.796 EAL: Detected shared linkage of DPDK 00:04:42.796 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:42.796 EAL: Selected IOVA mode 'VA' 00:04:42.796 EAL: VFIO support initialized 00:04:42.796 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:42.796 EAL: Using IOMMU type 1 (Type 1) 00:04:42.796 EAL: Ignore mapping IO port bar(1) 00:04:42.796 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:43.055 EAL: Ignore mapping IO port bar(1) 00:04:43.055 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:43.055 EAL: Ignore mapping IO port bar(1) 00:04:43.055 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:43.055 EAL: Ignore mapping IO port bar(1) 00:04:43.055 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:43.055 EAL: Ignore mapping IO port bar(1) 00:04:43.055 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:43.055 EAL: Ignore mapping IO port bar(1) 00:04:43.055 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:43.055 EAL: Ignore mapping IO port bar(1) 00:04:43.055 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:43.055 EAL: Ignore mapping IO port bar(1) 00:04:43.055 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:43.624 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:43.882 EAL: Ignore mapping IO port bar(1) 00:04:43.882 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:43.882 EAL: Ignore mapping IO port bar(1) 00:04:43.882 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:43.882 EAL: Ignore mapping IO port bar(1) 00:04:43.882 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:43.882 EAL: Ignore mapping IO port bar(1) 00:04:43.882 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:43.882 EAL: Ignore mapping IO port bar(1) 00:04:43.882 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:43.882 EAL: Ignore mapping IO port bar(1) 00:04:43.882 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:43.882 EAL: Ignore mapping IO port bar(1) 00:04:43.882 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:43.882 EAL: Ignore mapping IO port bar(1) 00:04:43.882 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:48.074 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:48.074 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:48.074 Starting DPDK initialization... 00:04:48.074 Starting SPDK post initialization... 00:04:48.074 SPDK NVMe probe 00:04:48.074 Attaching to 0000:5e:00.0 00:04:48.074 Attached to 0000:5e:00.0 00:04:48.074 Cleaning up... 00:04:48.074 00:04:48.074 real 0m4.925s 00:04:48.074 user 0m3.490s 00:04:48.074 sys 0m0.508s 00:04:48.074 19:06:10 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.074 19:06:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:48.074 ************************************ 00:04:48.074 END TEST env_dpdk_post_init 00:04:48.074 ************************************ 00:04:48.074 19:06:10 env -- env/env.sh@26 -- # uname 00:04:48.074 19:06:10 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:48.074 19:06:10 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:48.074 19:06:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.074 19:06:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.074 19:06:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.074 ************************************ 00:04:48.074 START TEST env_mem_callbacks 00:04:48.074 ************************************ 00:04:48.074 19:06:10 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:48.074 EAL: Detected CPU lcores: 96 00:04:48.074 EAL: Detected NUMA nodes: 2 00:04:48.074 EAL: Detected shared linkage of DPDK 00:04:48.074 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:48.074 EAL: Selected IOVA mode 'VA' 00:04:48.074 EAL: VFIO support initialized 00:04:48.074 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:48.074 00:04:48.074 00:04:48.074 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.074 http://cunit.sourceforge.net/ 00:04:48.074 00:04:48.074 00:04:48.074 Suite: memory 00:04:48.074 Test: test ... 
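The register/unregister lines that follow are the notifications this test is named for: a memory map's notify callback fires once for every region the env layer registers or unregisters, which is presumably how the test prints each event next to its own malloc/free markers. A minimal sketch of such a hook, assuming the spdk_mem_map API from spdk/env.h and a hugepage-backed environment; the names, sizes, and printout format are illustrative, not the test's exact code:

    #include <stdio.h>
    #include "spdk/env.h"

    /* Mirrors the "register"/"unregister" lines in the trace that follows. */
    static int
    demo_notify(void *cb_ctx, struct spdk_mem_map *map,
                enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
        printf("%s %p %zu\n",
               action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
               vaddr, size);
        return 0;
    }

    static const struct spdk_mem_map_ops demo_ops = {
        .notify_cb = demo_notify,
        .are_contiguous = NULL,
    };

    int
    main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_mem_map *map;
        void *buf;

        spdk_env_opts_init(&opts);
        opts.name = "mem_callbacks_demo";
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* Regions already registered with the env layer are replayed into the
         * callback immediately; later registrations are reported as they happen. */
        map = spdk_mem_map_alloc(0, &demo_ops, NULL);

        /* A DMA-able allocation that grows the heap may trigger further register events. */
        buf = spdk_malloc(4 * 1024 * 1024, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
        spdk_free(buf);

        spdk_mem_map_free(&map);
        return 0;
    }

Replaying existing registrations at spdk_mem_map_alloc() time is what lets a map stay consistent with memory that was added before the map existed, which is why register lines can appear before any new allocation.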
00:04:48.074 register 0x200000200000 2097152 00:04:48.074 malloc 3145728 00:04:48.074 register 0x200000400000 4194304 00:04:48.074 buf 0x200000500000 len 3145728 PASSED 00:04:48.074 malloc 64 00:04:48.074 buf 0x2000004fff40 len 64 PASSED 00:04:48.074 malloc 4194304 00:04:48.074 register 0x200000800000 6291456 00:04:48.074 buf 0x200000a00000 len 4194304 PASSED 00:04:48.074 free 0x200000500000 3145728 00:04:48.074 free 0x2000004fff40 64 00:04:48.074 unregister 0x200000400000 4194304 PASSED 00:04:48.074 free 0x200000a00000 4194304 00:04:48.074 unregister 0x200000800000 6291456 PASSED 00:04:48.074 malloc 8388608 00:04:48.074 register 0x200000400000 10485760 00:04:48.074 buf 0x200000600000 len 8388608 PASSED 00:04:48.074 free 0x200000600000 8388608 00:04:48.074 unregister 0x200000400000 10485760 PASSED 00:04:48.074 passed 00:04:48.074 00:04:48.074 Run Summary: Type Total Ran Passed Failed Inactive 00:04:48.074 suites 1 1 n/a 0 0 00:04:48.074 tests 1 1 1 0 0 00:04:48.074 asserts 15 15 15 0 n/a 00:04:48.074 00:04:48.074 Elapsed time = 0.008 seconds 00:04:48.074 00:04:48.074 real 0m0.060s 00:04:48.074 user 0m0.021s 00:04:48.074 sys 0m0.039s 00:04:48.075 19:06:10 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.075 19:06:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:48.075 ************************************ 00:04:48.075 END TEST env_mem_callbacks 00:04:48.075 ************************************ 00:04:48.075 00:04:48.075 real 0m6.813s 00:04:48.075 user 0m4.550s 00:04:48.075 sys 0m1.350s 00:04:48.075 19:06:10 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.075 19:06:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.075 ************************************ 00:04:48.075 END TEST env 00:04:48.075 ************************************ 00:04:48.075 19:06:10 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:48.075 19:06:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.075 19:06:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.075 19:06:10 -- common/autotest_common.sh@10 -- # set +x 00:04:48.075 ************************************ 00:04:48.075 START TEST rpc 00:04:48.075 ************************************ 00:04:48.075 19:06:10 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:48.075 * Looking for test storage... 
00:04:48.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:48.075 19:06:11 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:48.075 19:06:11 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:48.075 19:06:11 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:48.075 19:06:11 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:48.075 19:06:11 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.075 19:06:11 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.075 19:06:11 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.075 19:06:11 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.075 19:06:11 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.075 19:06:11 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.075 19:06:11 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.075 19:06:11 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.075 19:06:11 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.075 19:06:11 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.075 19:06:11 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.075 19:06:11 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:48.075 19:06:11 rpc -- scripts/common.sh@345 -- # : 1 00:04:48.075 19:06:11 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.075 19:06:11 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.075 19:06:11 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:48.075 19:06:11 rpc -- scripts/common.sh@353 -- # local d=1 00:04:48.075 19:06:11 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.075 19:06:11 rpc -- scripts/common.sh@355 -- # echo 1 00:04:48.075 19:06:11 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.075 19:06:11 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:48.075 19:06:11 rpc -- scripts/common.sh@353 -- # local d=2 00:04:48.075 19:06:11 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.075 19:06:11 rpc -- scripts/common.sh@355 -- # echo 2 00:04:48.075 19:06:11 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.075 19:06:11 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.075 19:06:11 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.075 19:06:11 rpc -- scripts/common.sh@368 -- # return 0 00:04:48.075 19:06:11 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.075 19:06:11 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:48.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.075 --rc genhtml_branch_coverage=1 00:04:48.075 --rc genhtml_function_coverage=1 00:04:48.075 --rc genhtml_legend=1 00:04:48.075 --rc geninfo_all_blocks=1 00:04:48.075 --rc geninfo_unexecuted_blocks=1 00:04:48.075 00:04:48.075 ' 00:04:48.075 19:06:11 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:48.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.075 --rc genhtml_branch_coverage=1 00:04:48.075 --rc genhtml_function_coverage=1 00:04:48.075 --rc genhtml_legend=1 00:04:48.075 --rc geninfo_all_blocks=1 00:04:48.075 --rc geninfo_unexecuted_blocks=1 00:04:48.075 00:04:48.075 ' 00:04:48.075 19:06:11 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:48.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.075 --rc genhtml_branch_coverage=1 00:04:48.075 --rc genhtml_function_coverage=1 
00:04:48.075 --rc genhtml_legend=1 00:04:48.075 --rc geninfo_all_blocks=1 00:04:48.075 --rc geninfo_unexecuted_blocks=1 00:04:48.075 00:04:48.075 ' 00:04:48.075 19:06:11 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:48.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.075 --rc genhtml_branch_coverage=1 00:04:48.075 --rc genhtml_function_coverage=1 00:04:48.075 --rc genhtml_legend=1 00:04:48.075 --rc geninfo_all_blocks=1 00:04:48.075 --rc geninfo_unexecuted_blocks=1 00:04:48.075 00:04:48.075 ' 00:04:48.075 19:06:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3546723 00:04:48.075 19:06:11 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:48.075 19:06:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.075 19:06:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3546723 00:04:48.075 19:06:11 rpc -- common/autotest_common.sh@835 -- # '[' -z 3546723 ']' 00:04:48.075 19:06:11 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.075 19:06:11 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.075 19:06:11 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.075 19:06:11 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.075 19:06:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.075 [2024-11-26 19:06:11.172100] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:04:48.075 [2024-11-26 19:06:11.172143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3546723 ] 00:04:48.333 [2024-11-26 19:06:11.246904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.333 [2024-11-26 19:06:11.290385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:48.333 [2024-11-26 19:06:11.290417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3546723' to capture a snapshot of events at runtime. 00:04:48.333 [2024-11-26 19:06:11.290427] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:48.333 [2024-11-26 19:06:11.290433] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:48.333 [2024-11-26 19:06:11.290438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3546723 for offline analysis/debug. 
00:04:48.333 [2024-11-26 19:06:11.290977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.901 19:06:11 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.901 19:06:11 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.901 19:06:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:48.901 19:06:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:48.901 19:06:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:48.901 19:06:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:48.901 19:06:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.901 19:06:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.901 19:06:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.161 ************************************ 00:04:49.161 START TEST rpc_integrity 00:04:49.161 ************************************ 00:04:49.161 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:49.161 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:49.161 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.161 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.161 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.161 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:49.161 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:49.161 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:49.161 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:49.161 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.161 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.161 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.161 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:49.161 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:49.161 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.161 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.161 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.161 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:49.161 { 00:04:49.161 "name": "Malloc0", 00:04:49.161 "aliases": [ 00:04:49.161 "06145c3f-fcf1-44fa-8c0c-aed33dfcc782" 00:04:49.161 ], 00:04:49.161 "product_name": "Malloc disk", 00:04:49.161 "block_size": 512, 00:04:49.161 "num_blocks": 16384, 00:04:49.161 "uuid": "06145c3f-fcf1-44fa-8c0c-aed33dfcc782", 00:04:49.161 "assigned_rate_limits": { 00:04:49.161 "rw_ios_per_sec": 0, 00:04:49.161 "rw_mbytes_per_sec": 0, 00:04:49.161 "r_mbytes_per_sec": 0, 00:04:49.161 "w_mbytes_per_sec": 0 00:04:49.161 }, 
00:04:49.161 "claimed": false, 00:04:49.161 "zoned": false, 00:04:49.161 "supported_io_types": { 00:04:49.161 "read": true, 00:04:49.161 "write": true, 00:04:49.161 "unmap": true, 00:04:49.161 "flush": true, 00:04:49.161 "reset": true, 00:04:49.161 "nvme_admin": false, 00:04:49.161 "nvme_io": false, 00:04:49.161 "nvme_io_md": false, 00:04:49.161 "write_zeroes": true, 00:04:49.161 "zcopy": true, 00:04:49.161 "get_zone_info": false, 00:04:49.161 "zone_management": false, 00:04:49.161 "zone_append": false, 00:04:49.161 "compare": false, 00:04:49.161 "compare_and_write": false, 00:04:49.161 "abort": true, 00:04:49.161 "seek_hole": false, 00:04:49.161 "seek_data": false, 00:04:49.161 "copy": true, 00:04:49.161 "nvme_iov_md": false 00:04:49.161 }, 00:04:49.161 "memory_domains": [ 00:04:49.161 { 00:04:49.161 "dma_device_id": "system", 00:04:49.161 "dma_device_type": 1 00:04:49.161 }, 00:04:49.161 { 00:04:49.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.161 "dma_device_type": 2 00:04:49.161 } 00:04:49.161 ], 00:04:49.161 "driver_specific": {} 00:04:49.161 } 00:04:49.161 ]' 00:04:49.161 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:49.161 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:49.161 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:49.161 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.161 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.161 [2024-11-26 19:06:12.168577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:49.161 [2024-11-26 19:06:12.168608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:49.161 [2024-11-26 19:06:12.168619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd16280 00:04:49.161 [2024-11-26 19:06:12.168625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:49.161 [2024-11-26 19:06:12.169714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:49.161 [2024-11-26 19:06:12.169735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:49.161 Passthru0 00:04:49.161 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.161 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:49.161 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.161 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.161 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.161 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:49.161 { 00:04:49.161 "name": "Malloc0", 00:04:49.161 "aliases": [ 00:04:49.161 "06145c3f-fcf1-44fa-8c0c-aed33dfcc782" 00:04:49.161 ], 00:04:49.161 "product_name": "Malloc disk", 00:04:49.161 "block_size": 512, 00:04:49.161 "num_blocks": 16384, 00:04:49.161 "uuid": "06145c3f-fcf1-44fa-8c0c-aed33dfcc782", 00:04:49.161 "assigned_rate_limits": { 00:04:49.161 "rw_ios_per_sec": 0, 00:04:49.161 "rw_mbytes_per_sec": 0, 00:04:49.161 "r_mbytes_per_sec": 0, 00:04:49.161 "w_mbytes_per_sec": 0 00:04:49.161 }, 00:04:49.161 "claimed": true, 00:04:49.161 "claim_type": "exclusive_write", 00:04:49.161 "zoned": false, 00:04:49.161 "supported_io_types": { 00:04:49.161 "read": true, 00:04:49.161 "write": true, 00:04:49.161 "unmap": true, 00:04:49.161 "flush": 
true, 00:04:49.161 "reset": true, 00:04:49.161 "nvme_admin": false, 00:04:49.161 "nvme_io": false, 00:04:49.161 "nvme_io_md": false, 00:04:49.161 "write_zeroes": true, 00:04:49.161 "zcopy": true, 00:04:49.161 "get_zone_info": false, 00:04:49.161 "zone_management": false, 00:04:49.161 "zone_append": false, 00:04:49.161 "compare": false, 00:04:49.161 "compare_and_write": false, 00:04:49.161 "abort": true, 00:04:49.161 "seek_hole": false, 00:04:49.161 "seek_data": false, 00:04:49.161 "copy": true, 00:04:49.161 "nvme_iov_md": false 00:04:49.161 }, 00:04:49.161 "memory_domains": [ 00:04:49.161 { 00:04:49.161 "dma_device_id": "system", 00:04:49.161 "dma_device_type": 1 00:04:49.161 }, 00:04:49.161 { 00:04:49.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.161 "dma_device_type": 2 00:04:49.161 } 00:04:49.161 ], 00:04:49.161 "driver_specific": {} 00:04:49.161 }, 00:04:49.161 { 00:04:49.161 "name": "Passthru0", 00:04:49.161 "aliases": [ 00:04:49.161 "6615baef-e895-57e4-bcf8-778b7b375691" 00:04:49.161 ], 00:04:49.161 "product_name": "passthru", 00:04:49.161 "block_size": 512, 00:04:49.161 "num_blocks": 16384, 00:04:49.161 "uuid": "6615baef-e895-57e4-bcf8-778b7b375691", 00:04:49.161 "assigned_rate_limits": { 00:04:49.161 "rw_ios_per_sec": 0, 00:04:49.161 "rw_mbytes_per_sec": 0, 00:04:49.161 "r_mbytes_per_sec": 0, 00:04:49.161 "w_mbytes_per_sec": 0 00:04:49.161 }, 00:04:49.161 "claimed": false, 00:04:49.161 "zoned": false, 00:04:49.161 "supported_io_types": { 00:04:49.161 "read": true, 00:04:49.161 "write": true, 00:04:49.161 "unmap": true, 00:04:49.161 "flush": true, 00:04:49.161 "reset": true, 00:04:49.161 "nvme_admin": false, 00:04:49.161 "nvme_io": false, 00:04:49.161 "nvme_io_md": false, 00:04:49.161 "write_zeroes": true, 00:04:49.161 "zcopy": true, 00:04:49.161 "get_zone_info": false, 00:04:49.161 "zone_management": false, 00:04:49.161 "zone_append": false, 00:04:49.161 "compare": false, 00:04:49.161 "compare_and_write": false, 00:04:49.161 "abort": true, 00:04:49.161 "seek_hole": false, 00:04:49.161 "seek_data": false, 00:04:49.161 "copy": true, 00:04:49.161 "nvme_iov_md": false 00:04:49.161 }, 00:04:49.161 "memory_domains": [ 00:04:49.161 { 00:04:49.161 "dma_device_id": "system", 00:04:49.161 "dma_device_type": 1 00:04:49.161 }, 00:04:49.161 { 00:04:49.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.161 "dma_device_type": 2 00:04:49.161 } 00:04:49.161 ], 00:04:49.161 "driver_specific": { 00:04:49.161 "passthru": { 00:04:49.161 "name": "Passthru0", 00:04:49.161 "base_bdev_name": "Malloc0" 00:04:49.161 } 00:04:49.161 } 00:04:49.161 } 00:04:49.161 ]' 00:04:49.161 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:49.162 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:49.162 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:49.162 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.162 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.162 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.162 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:49.162 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.162 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.162 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.162 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:49.162 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.162 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.421 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.421 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:49.422 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:49.422 19:06:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:49.422 00:04:49.422 real 0m0.284s 00:04:49.422 user 0m0.172s 00:04:49.422 sys 0m0.043s 00:04:49.422 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.422 19:06:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.422 ************************************ 00:04:49.422 END TEST rpc_integrity 00:04:49.422 ************************************ 00:04:49.422 19:06:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:49.422 19:06:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.422 19:06:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.422 19:06:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.422 ************************************ 00:04:49.422 START TEST rpc_plugins 00:04:49.422 ************************************ 00:04:49.422 19:06:12 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:49.422 19:06:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:49.422 19:06:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.422 19:06:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.422 19:06:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.422 19:06:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:49.422 19:06:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:49.422 19:06:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.422 19:06:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.422 19:06:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.422 19:06:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:49.422 { 00:04:49.422 "name": "Malloc1", 00:04:49.422 "aliases": [ 00:04:49.422 "73a163ad-c976-42e3-8e8c-915bd5f1291e" 00:04:49.422 ], 00:04:49.422 "product_name": "Malloc disk", 00:04:49.422 "block_size": 4096, 00:04:49.422 "num_blocks": 256, 00:04:49.422 "uuid": "73a163ad-c976-42e3-8e8c-915bd5f1291e", 00:04:49.422 "assigned_rate_limits": { 00:04:49.422 "rw_ios_per_sec": 0, 00:04:49.422 "rw_mbytes_per_sec": 0, 00:04:49.422 "r_mbytes_per_sec": 0, 00:04:49.422 "w_mbytes_per_sec": 0 00:04:49.422 }, 00:04:49.422 "claimed": false, 00:04:49.422 "zoned": false, 00:04:49.422 "supported_io_types": { 00:04:49.422 "read": true, 00:04:49.422 "write": true, 00:04:49.422 "unmap": true, 00:04:49.422 "flush": true, 00:04:49.422 "reset": true, 00:04:49.422 "nvme_admin": false, 00:04:49.422 "nvme_io": false, 00:04:49.422 "nvme_io_md": false, 00:04:49.422 "write_zeroes": true, 00:04:49.422 "zcopy": true, 00:04:49.422 "get_zone_info": false, 00:04:49.422 "zone_management": false, 00:04:49.422 "zone_append": false, 00:04:49.422 "compare": false, 00:04:49.422 "compare_and_write": false, 00:04:49.422 "abort": true, 00:04:49.422 "seek_hole": false, 00:04:49.422 "seek_data": false, 00:04:49.422 "copy": true, 00:04:49.422 "nvme_iov_md": false 
00:04:49.422 }, 00:04:49.422 "memory_domains": [ 00:04:49.422 { 00:04:49.422 "dma_device_id": "system", 00:04:49.422 "dma_device_type": 1 00:04:49.422 }, 00:04:49.422 { 00:04:49.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.422 "dma_device_type": 2 00:04:49.422 } 00:04:49.422 ], 00:04:49.422 "driver_specific": {} 00:04:49.422 } 00:04:49.422 ]' 00:04:49.422 19:06:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:49.422 19:06:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:49.422 19:06:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:49.422 19:06:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.422 19:06:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.422 19:06:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.422 19:06:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:49.422 19:06:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.422 19:06:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.422 19:06:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.422 19:06:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:49.422 19:06:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:49.422 19:06:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:49.422 00:04:49.422 real 0m0.143s 00:04:49.422 user 0m0.084s 00:04:49.422 sys 0m0.023s 00:04:49.422 19:06:12 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.422 19:06:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.422 ************************************ 00:04:49.422 END TEST rpc_plugins 00:04:49.422 ************************************ 00:04:49.681 19:06:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:49.681 19:06:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.681 19:06:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.681 19:06:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.681 ************************************ 00:04:49.681 START TEST rpc_trace_cmd_test 00:04:49.681 ************************************ 00:04:49.681 19:06:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:49.681 19:06:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:49.681 19:06:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:49.681 19:06:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.681 19:06:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:49.681 19:06:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.681 19:06:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:49.681 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3546723", 00:04:49.681 "tpoint_group_mask": "0x8", 00:04:49.681 "iscsi_conn": { 00:04:49.681 "mask": "0x2", 00:04:49.681 "tpoint_mask": "0x0" 00:04:49.681 }, 00:04:49.681 "scsi": { 00:04:49.681 "mask": "0x4", 00:04:49.681 "tpoint_mask": "0x0" 00:04:49.681 }, 00:04:49.681 "bdev": { 00:04:49.681 "mask": "0x8", 00:04:49.681 "tpoint_mask": "0xffffffffffffffff" 00:04:49.681 }, 00:04:49.681 "nvmf_rdma": { 00:04:49.681 "mask": "0x10", 00:04:49.681 "tpoint_mask": "0x0" 00:04:49.681 }, 00:04:49.681 "nvmf_tcp": { 00:04:49.681 "mask": "0x20", 00:04:49.681 
"tpoint_mask": "0x0" 00:04:49.681 }, 00:04:49.681 "ftl": { 00:04:49.681 "mask": "0x40", 00:04:49.681 "tpoint_mask": "0x0" 00:04:49.681 }, 00:04:49.681 "blobfs": { 00:04:49.681 "mask": "0x80", 00:04:49.681 "tpoint_mask": "0x0" 00:04:49.681 }, 00:04:49.681 "dsa": { 00:04:49.681 "mask": "0x200", 00:04:49.681 "tpoint_mask": "0x0" 00:04:49.681 }, 00:04:49.681 "thread": { 00:04:49.681 "mask": "0x400", 00:04:49.681 "tpoint_mask": "0x0" 00:04:49.681 }, 00:04:49.681 "nvme_pcie": { 00:04:49.681 "mask": "0x800", 00:04:49.681 "tpoint_mask": "0x0" 00:04:49.681 }, 00:04:49.681 "iaa": { 00:04:49.681 "mask": "0x1000", 00:04:49.681 "tpoint_mask": "0x0" 00:04:49.681 }, 00:04:49.681 "nvme_tcp": { 00:04:49.681 "mask": "0x2000", 00:04:49.681 "tpoint_mask": "0x0" 00:04:49.681 }, 00:04:49.681 "bdev_nvme": { 00:04:49.681 "mask": "0x4000", 00:04:49.681 "tpoint_mask": "0x0" 00:04:49.681 }, 00:04:49.681 "sock": { 00:04:49.681 "mask": "0x8000", 00:04:49.681 "tpoint_mask": "0x0" 00:04:49.681 }, 00:04:49.681 "blob": { 00:04:49.681 "mask": "0x10000", 00:04:49.681 "tpoint_mask": "0x0" 00:04:49.681 }, 00:04:49.681 "bdev_raid": { 00:04:49.681 "mask": "0x20000", 00:04:49.681 "tpoint_mask": "0x0" 00:04:49.681 }, 00:04:49.681 "scheduler": { 00:04:49.681 "mask": "0x40000", 00:04:49.681 "tpoint_mask": "0x0" 00:04:49.681 } 00:04:49.681 }' 00:04:49.681 19:06:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:49.681 19:06:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:49.681 19:06:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:49.681 19:06:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:49.681 19:06:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:49.681 19:06:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:49.681 19:06:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:49.681 19:06:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:49.681 19:06:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:49.940 19:06:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:49.940 00:04:49.940 real 0m0.223s 00:04:49.940 user 0m0.191s 00:04:49.940 sys 0m0.025s 00:04:49.940 19:06:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.940 19:06:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:49.940 ************************************ 00:04:49.940 END TEST rpc_trace_cmd_test 00:04:49.940 ************************************ 00:04:49.940 19:06:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:49.940 19:06:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:49.940 19:06:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:49.940 19:06:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.940 19:06:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.940 19:06:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.940 ************************************ 00:04:49.940 START TEST rpc_daemon_integrity 00:04:49.940 ************************************ 00:04:49.940 19:06:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:49.940 19:06:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:49.940 19:06:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.940 19:06:12 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.940 19:06:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.940 19:06:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:49.940 19:06:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:49.940 19:06:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:49.941 19:06:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:49.941 19:06:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.941 19:06:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.941 19:06:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.941 19:06:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:49.941 19:06:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:49.941 19:06:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.941 19:06:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.941 19:06:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.941 19:06:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:49.941 { 00:04:49.941 "name": "Malloc2", 00:04:49.941 "aliases": [ 00:04:49.941 "d20a0132-1363-4032-b869-6432728a52b2" 00:04:49.941 ], 00:04:49.941 "product_name": "Malloc disk", 00:04:49.941 "block_size": 512, 00:04:49.941 "num_blocks": 16384, 00:04:49.941 "uuid": "d20a0132-1363-4032-b869-6432728a52b2", 00:04:49.941 "assigned_rate_limits": { 00:04:49.941 "rw_ios_per_sec": 0, 00:04:49.941 "rw_mbytes_per_sec": 0, 00:04:49.941 "r_mbytes_per_sec": 0, 00:04:49.941 "w_mbytes_per_sec": 0 00:04:49.941 }, 00:04:49.941 "claimed": false, 00:04:49.941 "zoned": false, 00:04:49.941 "supported_io_types": { 00:04:49.941 "read": true, 00:04:49.941 "write": true, 00:04:49.941 "unmap": true, 00:04:49.941 "flush": true, 00:04:49.941 "reset": true, 00:04:49.941 "nvme_admin": false, 00:04:49.941 "nvme_io": false, 00:04:49.941 "nvme_io_md": false, 00:04:49.941 "write_zeroes": true, 00:04:49.941 "zcopy": true, 00:04:49.941 "get_zone_info": false, 00:04:49.941 "zone_management": false, 00:04:49.941 "zone_append": false, 00:04:49.941 "compare": false, 00:04:49.941 "compare_and_write": false, 00:04:49.941 "abort": true, 00:04:49.941 "seek_hole": false, 00:04:49.941 "seek_data": false, 00:04:49.941 "copy": true, 00:04:49.941 "nvme_iov_md": false 00:04:49.941 }, 00:04:49.941 "memory_domains": [ 00:04:49.941 { 00:04:49.941 "dma_device_id": "system", 00:04:49.941 "dma_device_type": 1 00:04:49.941 }, 00:04:49.941 { 00:04:49.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.941 "dma_device_type": 2 00:04:49.941 } 00:04:49.941 ], 00:04:49.941 "driver_specific": {} 00:04:49.941 } 00:04:49.941 ]' 00:04:49.941 19:06:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:49.941 19:06:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:49.941 19:06:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:49.941 19:06:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.941 19:06:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.941 [2024-11-26 19:06:13.018881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:49.941 
[2024-11-26 19:06:13.018912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:49.941 [2024-11-26 19:06:13.018923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd18150 00:04:49.941 [2024-11-26 19:06:13.018929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:49.941 [2024-11-26 19:06:13.019925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:49.941 [2024-11-26 19:06:13.019944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:49.941 Passthru0 00:04:49.941 19:06:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.941 19:06:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:49.941 19:06:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.941 19:06:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.941 19:06:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.941 19:06:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:49.941 { 00:04:49.941 "name": "Malloc2", 00:04:49.941 "aliases": [ 00:04:49.941 "d20a0132-1363-4032-b869-6432728a52b2" 00:04:49.941 ], 00:04:49.941 "product_name": "Malloc disk", 00:04:49.941 "block_size": 512, 00:04:49.941 "num_blocks": 16384, 00:04:49.941 "uuid": "d20a0132-1363-4032-b869-6432728a52b2", 00:04:49.941 "assigned_rate_limits": { 00:04:49.941 "rw_ios_per_sec": 0, 00:04:49.941 "rw_mbytes_per_sec": 0, 00:04:49.941 "r_mbytes_per_sec": 0, 00:04:49.941 "w_mbytes_per_sec": 0 00:04:49.941 }, 00:04:49.941 "claimed": true, 00:04:49.941 "claim_type": "exclusive_write", 00:04:49.941 "zoned": false, 00:04:49.941 "supported_io_types": { 00:04:49.941 "read": true, 00:04:49.941 "write": true, 00:04:49.941 "unmap": true, 00:04:49.941 "flush": true, 00:04:49.941 "reset": true, 00:04:49.941 "nvme_admin": false, 00:04:49.941 "nvme_io": false, 00:04:49.941 "nvme_io_md": false, 00:04:49.941 "write_zeroes": true, 00:04:49.941 "zcopy": true, 00:04:49.941 "get_zone_info": false, 00:04:49.941 "zone_management": false, 00:04:49.941 "zone_append": false, 00:04:49.941 "compare": false, 00:04:49.941 "compare_and_write": false, 00:04:49.941 "abort": true, 00:04:49.941 "seek_hole": false, 00:04:49.941 "seek_data": false, 00:04:49.941 "copy": true, 00:04:49.941 "nvme_iov_md": false 00:04:49.941 }, 00:04:49.941 "memory_domains": [ 00:04:49.941 { 00:04:49.941 "dma_device_id": "system", 00:04:49.941 "dma_device_type": 1 00:04:49.941 }, 00:04:49.941 { 00:04:49.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.941 "dma_device_type": 2 00:04:49.941 } 00:04:49.941 ], 00:04:49.941 "driver_specific": {} 00:04:49.941 }, 00:04:49.941 { 00:04:49.941 "name": "Passthru0", 00:04:49.941 "aliases": [ 00:04:49.941 "dc03364b-91e1-5a3a-a1fa-4fab54ac2090" 00:04:49.941 ], 00:04:49.941 "product_name": "passthru", 00:04:49.941 "block_size": 512, 00:04:49.941 "num_blocks": 16384, 00:04:49.941 "uuid": "dc03364b-91e1-5a3a-a1fa-4fab54ac2090", 00:04:49.941 "assigned_rate_limits": { 00:04:49.941 "rw_ios_per_sec": 0, 00:04:49.941 "rw_mbytes_per_sec": 0, 00:04:49.941 "r_mbytes_per_sec": 0, 00:04:49.941 "w_mbytes_per_sec": 0 00:04:49.941 }, 00:04:49.941 "claimed": false, 00:04:49.941 "zoned": false, 00:04:49.941 "supported_io_types": { 00:04:49.941 "read": true, 00:04:49.941 "write": true, 00:04:49.941 "unmap": true, 00:04:49.941 "flush": true, 00:04:49.941 "reset": true, 
00:04:49.941 "nvme_admin": false, 00:04:49.941 "nvme_io": false, 00:04:49.941 "nvme_io_md": false, 00:04:49.941 "write_zeroes": true, 00:04:49.941 "zcopy": true, 00:04:49.941 "get_zone_info": false, 00:04:49.941 "zone_management": false, 00:04:49.941 "zone_append": false, 00:04:49.941 "compare": false, 00:04:49.941 "compare_and_write": false, 00:04:49.941 "abort": true, 00:04:49.941 "seek_hole": false, 00:04:49.941 "seek_data": false, 00:04:49.941 "copy": true, 00:04:49.941 "nvme_iov_md": false 00:04:49.941 }, 00:04:49.941 "memory_domains": [ 00:04:49.941 { 00:04:49.941 "dma_device_id": "system", 00:04:49.941 "dma_device_type": 1 00:04:49.941 }, 00:04:49.941 { 00:04:49.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.941 "dma_device_type": 2 00:04:49.941 } 00:04:49.941 ], 00:04:49.941 "driver_specific": { 00:04:49.941 "passthru": { 00:04:49.941 "name": "Passthru0", 00:04:49.941 "base_bdev_name": "Malloc2" 00:04:49.941 } 00:04:49.941 } 00:04:49.941 } 00:04:49.941 ]' 00:04:49.941 19:06:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:50.201 19:06:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:50.201 19:06:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:50.201 19:06:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.201 19:06:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.201 19:06:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.201 19:06:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:50.201 19:06:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.201 19:06:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.201 19:06:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.201 19:06:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:50.201 19:06:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.201 19:06:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.201 19:06:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.201 19:06:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:50.201 19:06:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:50.201 19:06:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:50.201 00:04:50.201 real 0m0.271s 00:04:50.201 user 0m0.164s 00:04:50.201 sys 0m0.038s 00:04:50.201 19:06:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.201 19:06:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.201 ************************************ 00:04:50.201 END TEST rpc_daemon_integrity 00:04:50.201 ************************************ 00:04:50.201 19:06:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:50.201 19:06:13 rpc -- rpc/rpc.sh@84 -- # killprocess 3546723 00:04:50.201 19:06:13 rpc -- common/autotest_common.sh@954 -- # '[' -z 3546723 ']' 00:04:50.201 19:06:13 rpc -- common/autotest_common.sh@958 -- # kill -0 3546723 00:04:50.201 19:06:13 rpc -- common/autotest_common.sh@959 -- # uname 00:04:50.201 19:06:13 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.201 19:06:13 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3546723 
00:04:50.201 19:06:13 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.201 19:06:13 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.201 19:06:13 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3546723' 00:04:50.201 killing process with pid 3546723 00:04:50.201 19:06:13 rpc -- common/autotest_common.sh@973 -- # kill 3546723 00:04:50.201 19:06:13 rpc -- common/autotest_common.sh@978 -- # wait 3546723 00:04:50.460 00:04:50.460 real 0m2.605s 00:04:50.460 user 0m3.307s 00:04:50.460 sys 0m0.745s 00:04:50.460 19:06:13 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.460 19:06:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.460 ************************************ 00:04:50.460 END TEST rpc 00:04:50.460 ************************************ 00:04:50.719 19:06:13 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:50.719 19:06:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.719 19:06:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.719 19:06:13 -- common/autotest_common.sh@10 -- # set +x 00:04:50.719 ************************************ 00:04:50.719 START TEST skip_rpc 00:04:50.719 ************************************ 00:04:50.719 19:06:13 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:50.719 * Looking for test storage... 00:04:50.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:50.719 19:06:13 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.719 19:06:13 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.719 19:06:13 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.719 19:06:13 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.719 19:06:13 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:50.720 19:06:13 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.720 19:06:13 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.720 --rc genhtml_branch_coverage=1 00:04:50.720 --rc genhtml_function_coverage=1 00:04:50.720 --rc genhtml_legend=1 00:04:50.720 --rc geninfo_all_blocks=1 00:04:50.720 --rc geninfo_unexecuted_blocks=1 00:04:50.720 00:04:50.720 ' 00:04:50.720 19:06:13 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.720 --rc genhtml_branch_coverage=1 00:04:50.720 --rc genhtml_function_coverage=1 00:04:50.720 --rc genhtml_legend=1 00:04:50.720 --rc geninfo_all_blocks=1 00:04:50.720 --rc geninfo_unexecuted_blocks=1 00:04:50.720 00:04:50.720 ' 00:04:50.720 19:06:13 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.720 --rc genhtml_branch_coverage=1 00:04:50.720 --rc genhtml_function_coverage=1 00:04:50.720 --rc genhtml_legend=1 00:04:50.720 --rc geninfo_all_blocks=1 00:04:50.720 --rc geninfo_unexecuted_blocks=1 00:04:50.720 00:04:50.720 ' 00:04:50.720 19:06:13 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.720 --rc genhtml_branch_coverage=1 00:04:50.720 --rc genhtml_function_coverage=1 00:04:50.720 --rc genhtml_legend=1 00:04:50.720 --rc geninfo_all_blocks=1 00:04:50.720 --rc geninfo_unexecuted_blocks=1 00:04:50.720 00:04:50.720 ' 00:04:50.720 19:06:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:50.720 19:06:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:50.720 19:06:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:50.720 19:06:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.720 19:06:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.720 19:06:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.720 ************************************ 00:04:50.720 START TEST skip_rpc 00:04:50.720 ************************************ 00:04:50.720 19:06:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:50.720 
19:06:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3547372 00:04:50.720 19:06:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.720 19:06:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:50.720 19:06:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:50.980 [2024-11-26 19:06:13.876953] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:04:50.980 [2024-11-26 19:06:13.876992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3547372 ] 00:04:50.980 [2024-11-26 19:06:13.948802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.980 [2024-11-26 19:06:13.988364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.270 19:06:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:56.270 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3547372 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3547372 ']' 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3547372 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3547372 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.271 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3547372' 00:04:56.271 killing process with pid 3547372 00:04:56.271 19:06:18 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3547372 00:04:56.272 19:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3547372 00:04:56.272 00:04:56.272 real 0m5.362s 00:04:56.272 user 0m5.134s 00:04:56.272 sys 0m0.264s 00:04:56.272 19:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.272 19:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.272 ************************************ 00:04:56.272 END TEST skip_rpc 00:04:56.272 ************************************ 00:04:56.272 19:06:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:56.272 19:06:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.272 19:06:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.272 19:06:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.272 ************************************ 00:04:56.272 START TEST skip_rpc_with_json 00:04:56.272 ************************************ 00:04:56.272 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:56.272 19:06:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:56.272 19:06:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3548318 00:04:56.272 19:06:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.272 19:06:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.272 19:06:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3548318 00:04:56.272 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3548318 ']' 00:04:56.272 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.272 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.272 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.273 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.273 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.273 [2024-11-26 19:06:19.307716] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:04:56.273 [2024-11-26 19:06:19.307756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3548318 ] 00:04:56.540 [2024-11-26 19:06:19.383514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.540 [2024-11-26 19:06:19.422729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.540 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.540 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:56.540 19:06:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:56.540 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.540 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.540 [2024-11-26 19:06:19.646616] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:56.540 request: 00:04:56.540 { 00:04:56.540 "trtype": "tcp", 00:04:56.540 "method": "nvmf_get_transports", 00:04:56.540 "req_id": 1 00:04:56.540 } 00:04:56.540 Got JSON-RPC error response 00:04:56.540 response: 00:04:56.540 { 00:04:56.540 "code": -19, 00:04:56.540 "message": "No such device" 00:04:56.540 } 00:04:56.540 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:56.540 19:06:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:56.540 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.800 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.800 [2024-11-26 19:06:19.658728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:56.800 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.800 19:06:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:56.800 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.800 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.800 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.800 19:06:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:56.800 { 00:04:56.800 "subsystems": [ 00:04:56.800 { 00:04:56.800 "subsystem": "fsdev", 00:04:56.800 "config": [ 00:04:56.800 { 00:04:56.800 "method": "fsdev_set_opts", 00:04:56.800 "params": { 00:04:56.800 "fsdev_io_pool_size": 65535, 00:04:56.800 "fsdev_io_cache_size": 256 00:04:56.800 } 00:04:56.800 } 00:04:56.800 ] 00:04:56.800 }, 00:04:56.800 { 00:04:56.800 "subsystem": "vfio_user_target", 00:04:56.800 "config": null 00:04:56.800 }, 00:04:56.800 { 00:04:56.801 "subsystem": "keyring", 00:04:56.801 "config": [] 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "subsystem": "iobuf", 00:04:56.801 "config": [ 00:04:56.801 { 00:04:56.801 "method": "iobuf_set_options", 00:04:56.801 "params": { 00:04:56.801 "small_pool_count": 8192, 00:04:56.801 "large_pool_count": 1024, 00:04:56.801 "small_bufsize": 8192, 00:04:56.801 "large_bufsize": 135168, 00:04:56.801 "enable_numa": false 00:04:56.801 } 00:04:56.801 } 
00:04:56.801 ] 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "subsystem": "sock", 00:04:56.801 "config": [ 00:04:56.801 { 00:04:56.801 "method": "sock_set_default_impl", 00:04:56.801 "params": { 00:04:56.801 "impl_name": "posix" 00:04:56.801 } 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "method": "sock_impl_set_options", 00:04:56.801 "params": { 00:04:56.801 "impl_name": "ssl", 00:04:56.801 "recv_buf_size": 4096, 00:04:56.801 "send_buf_size": 4096, 00:04:56.801 "enable_recv_pipe": true, 00:04:56.801 "enable_quickack": false, 00:04:56.801 "enable_placement_id": 0, 00:04:56.801 "enable_zerocopy_send_server": true, 00:04:56.801 "enable_zerocopy_send_client": false, 00:04:56.801 "zerocopy_threshold": 0, 00:04:56.801 "tls_version": 0, 00:04:56.801 "enable_ktls": false 00:04:56.801 } 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "method": "sock_impl_set_options", 00:04:56.801 "params": { 00:04:56.801 "impl_name": "posix", 00:04:56.801 "recv_buf_size": 2097152, 00:04:56.801 "send_buf_size": 2097152, 00:04:56.801 "enable_recv_pipe": true, 00:04:56.801 "enable_quickack": false, 00:04:56.801 "enable_placement_id": 0, 00:04:56.801 "enable_zerocopy_send_server": true, 00:04:56.801 "enable_zerocopy_send_client": false, 00:04:56.801 "zerocopy_threshold": 0, 00:04:56.801 "tls_version": 0, 00:04:56.801 "enable_ktls": false 00:04:56.801 } 00:04:56.801 } 00:04:56.801 ] 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "subsystem": "vmd", 00:04:56.801 "config": [] 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "subsystem": "accel", 00:04:56.801 "config": [ 00:04:56.801 { 00:04:56.801 "method": "accel_set_options", 00:04:56.801 "params": { 00:04:56.801 "small_cache_size": 128, 00:04:56.801 "large_cache_size": 16, 00:04:56.801 "task_count": 2048, 00:04:56.801 "sequence_count": 2048, 00:04:56.801 "buf_count": 2048 00:04:56.801 } 00:04:56.801 } 00:04:56.801 ] 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "subsystem": "bdev", 00:04:56.801 "config": [ 00:04:56.801 { 00:04:56.801 "method": "bdev_set_options", 00:04:56.801 "params": { 00:04:56.801 "bdev_io_pool_size": 65535, 00:04:56.801 "bdev_io_cache_size": 256, 00:04:56.801 "bdev_auto_examine": true, 00:04:56.801 "iobuf_small_cache_size": 128, 00:04:56.801 "iobuf_large_cache_size": 16 00:04:56.801 } 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "method": "bdev_raid_set_options", 00:04:56.801 "params": { 00:04:56.801 "process_window_size_kb": 1024, 00:04:56.801 "process_max_bandwidth_mb_sec": 0 00:04:56.801 } 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "method": "bdev_iscsi_set_options", 00:04:56.801 "params": { 00:04:56.801 "timeout_sec": 30 00:04:56.801 } 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "method": "bdev_nvme_set_options", 00:04:56.801 "params": { 00:04:56.801 "action_on_timeout": "none", 00:04:56.801 "timeout_us": 0, 00:04:56.801 "timeout_admin_us": 0, 00:04:56.801 "keep_alive_timeout_ms": 10000, 00:04:56.801 "arbitration_burst": 0, 00:04:56.801 "low_priority_weight": 0, 00:04:56.801 "medium_priority_weight": 0, 00:04:56.801 "high_priority_weight": 0, 00:04:56.801 "nvme_adminq_poll_period_us": 10000, 00:04:56.801 "nvme_ioq_poll_period_us": 0, 00:04:56.801 "io_queue_requests": 0, 00:04:56.801 "delay_cmd_submit": true, 00:04:56.801 "transport_retry_count": 4, 00:04:56.801 "bdev_retry_count": 3, 00:04:56.801 "transport_ack_timeout": 0, 00:04:56.801 "ctrlr_loss_timeout_sec": 0, 00:04:56.801 "reconnect_delay_sec": 0, 00:04:56.801 "fast_io_fail_timeout_sec": 0, 00:04:56.801 "disable_auto_failback": false, 00:04:56.801 "generate_uuids": false, 00:04:56.801 "transport_tos": 
0, 00:04:56.801 "nvme_error_stat": false, 00:04:56.801 "rdma_srq_size": 0, 00:04:56.801 "io_path_stat": false, 00:04:56.801 "allow_accel_sequence": false, 00:04:56.801 "rdma_max_cq_size": 0, 00:04:56.801 "rdma_cm_event_timeout_ms": 0, 00:04:56.801 "dhchap_digests": [ 00:04:56.801 "sha256", 00:04:56.801 "sha384", 00:04:56.801 "sha512" 00:04:56.801 ], 00:04:56.801 "dhchap_dhgroups": [ 00:04:56.801 "null", 00:04:56.801 "ffdhe2048", 00:04:56.801 "ffdhe3072", 00:04:56.801 "ffdhe4096", 00:04:56.801 "ffdhe6144", 00:04:56.801 "ffdhe8192" 00:04:56.801 ] 00:04:56.801 } 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "method": "bdev_nvme_set_hotplug", 00:04:56.801 "params": { 00:04:56.801 "period_us": 100000, 00:04:56.801 "enable": false 00:04:56.801 } 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "method": "bdev_wait_for_examine" 00:04:56.801 } 00:04:56.801 ] 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "subsystem": "scsi", 00:04:56.801 "config": null 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "subsystem": "scheduler", 00:04:56.801 "config": [ 00:04:56.801 { 00:04:56.801 "method": "framework_set_scheduler", 00:04:56.801 "params": { 00:04:56.801 "name": "static" 00:04:56.801 } 00:04:56.801 } 00:04:56.801 ] 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "subsystem": "vhost_scsi", 00:04:56.801 "config": [] 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "subsystem": "vhost_blk", 00:04:56.801 "config": [] 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "subsystem": "ublk", 00:04:56.801 "config": [] 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "subsystem": "nbd", 00:04:56.801 "config": [] 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "subsystem": "nvmf", 00:04:56.801 "config": [ 00:04:56.801 { 00:04:56.801 "method": "nvmf_set_config", 00:04:56.801 "params": { 00:04:56.801 "discovery_filter": "match_any", 00:04:56.801 "admin_cmd_passthru": { 00:04:56.801 "identify_ctrlr": false 00:04:56.801 }, 00:04:56.801 "dhchap_digests": [ 00:04:56.801 "sha256", 00:04:56.801 "sha384", 00:04:56.801 "sha512" 00:04:56.801 ], 00:04:56.801 "dhchap_dhgroups": [ 00:04:56.801 "null", 00:04:56.801 "ffdhe2048", 00:04:56.801 "ffdhe3072", 00:04:56.801 "ffdhe4096", 00:04:56.801 "ffdhe6144", 00:04:56.801 "ffdhe8192" 00:04:56.801 ] 00:04:56.801 } 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "method": "nvmf_set_max_subsystems", 00:04:56.801 "params": { 00:04:56.801 "max_subsystems": 1024 00:04:56.801 } 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "method": "nvmf_set_crdt", 00:04:56.801 "params": { 00:04:56.801 "crdt1": 0, 00:04:56.801 "crdt2": 0, 00:04:56.801 "crdt3": 0 00:04:56.801 } 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "method": "nvmf_create_transport", 00:04:56.801 "params": { 00:04:56.801 "trtype": "TCP", 00:04:56.801 "max_queue_depth": 128, 00:04:56.801 "max_io_qpairs_per_ctrlr": 127, 00:04:56.801 "in_capsule_data_size": 4096, 00:04:56.801 "max_io_size": 131072, 00:04:56.801 "io_unit_size": 131072, 00:04:56.801 "max_aq_depth": 128, 00:04:56.801 "num_shared_buffers": 511, 00:04:56.801 "buf_cache_size": 4294967295, 00:04:56.801 "dif_insert_or_strip": false, 00:04:56.801 "zcopy": false, 00:04:56.801 "c2h_success": true, 00:04:56.801 "sock_priority": 0, 00:04:56.801 "abort_timeout_sec": 1, 00:04:56.801 "ack_timeout": 0, 00:04:56.801 "data_wr_pool_size": 0 00:04:56.801 } 00:04:56.801 } 00:04:56.801 ] 00:04:56.801 }, 00:04:56.801 { 00:04:56.801 "subsystem": "iscsi", 00:04:56.801 "config": [ 00:04:56.801 { 00:04:56.801 "method": "iscsi_set_options", 00:04:56.801 "params": { 00:04:56.801 "node_base": "iqn.2016-06.io.spdk", 00:04:56.801 "max_sessions": 
128, 00:04:56.801 "max_connections_per_session": 2, 00:04:56.801 "max_queue_depth": 64, 00:04:56.801 "default_time2wait": 2, 00:04:56.801 "default_time2retain": 20, 00:04:56.801 "first_burst_length": 8192, 00:04:56.801 "immediate_data": true, 00:04:56.801 "allow_duplicated_isid": false, 00:04:56.801 "error_recovery_level": 0, 00:04:56.801 "nop_timeout": 60, 00:04:56.801 "nop_in_interval": 30, 00:04:56.801 "disable_chap": false, 00:04:56.801 "require_chap": false, 00:04:56.801 "mutual_chap": false, 00:04:56.801 "chap_group": 0, 00:04:56.801 "max_large_datain_per_connection": 64, 00:04:56.801 "max_r2t_per_connection": 4, 00:04:56.801 "pdu_pool_size": 36864, 00:04:56.801 "immediate_data_pool_size": 16384, 00:04:56.801 "data_out_pool_size": 2048 00:04:56.801 } 00:04:56.801 } 00:04:56.801 ] 00:04:56.801 } 00:04:56.801 ] 00:04:56.801 } 00:04:56.801 19:06:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:56.802 19:06:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3548318 00:04:56.802 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3548318 ']' 00:04:56.802 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3548318 00:04:56.802 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:56.802 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.802 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3548318 00:04:56.802 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.802 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.802 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3548318' 00:04:56.802 killing process with pid 3548318 00:04:56.802 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3548318 00:04:56.802 19:06:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3548318 00:04:57.370 19:06:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:57.370 19:06:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3548421 00:04:57.370 19:06:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3548421 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3548421 ']' 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3548421 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3548421 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3548421' 00:05:02.642 killing process with pid 3548421 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3548421 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3548421 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:02.642 00:05:02.642 real 0m6.292s 00:05:02.642 user 0m6.003s 00:05:02.642 sys 0m0.583s 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:02.642 ************************************ 00:05:02.642 END TEST skip_rpc_with_json 00:05:02.642 ************************************ 00:05:02.642 19:06:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:02.642 19:06:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.642 19:06:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.642 19:06:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.642 ************************************ 00:05:02.642 START TEST skip_rpc_with_delay 00:05:02.642 ************************************ 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:02.642 
[2024-11-26 19:06:25.671474] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:02.642 00:05:02.642 real 0m0.069s 00:05:02.642 user 0m0.037s 00:05:02.642 sys 0m0.031s 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.642 19:06:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:02.642 ************************************ 00:05:02.642 END TEST skip_rpc_with_delay 00:05:02.642 ************************************ 00:05:02.642 19:06:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:02.642 19:06:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:02.642 19:06:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:02.642 19:06:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.642 19:06:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.642 19:06:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.912 ************************************ 00:05:02.912 START TEST exit_on_failed_rpc_init 00:05:02.912 ************************************ 00:05:02.912 19:06:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:02.912 19:06:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3549431 00:05:02.912 19:06:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.912 19:06:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3549431 00:05:02.912 19:06:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3549431 ']' 00:05:02.912 19:06:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.912 19:06:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.912 19:06:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.912 19:06:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.912 19:06:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:02.912 [2024-11-26 19:06:25.804267] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:05:02.912 [2024-11-26 19:06:25.804308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3549431 ] 00:05:02.912 [2024-11-26 19:06:25.880565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.912 [2024-11-26 19:06:25.923560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.178 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.178 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:03.178 19:06:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.178 19:06:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:03.178 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:03.178 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:03.178 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.178 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.178 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.178 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.178 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.178 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.178 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.178 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:03.178 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:03.178 [2024-11-26 19:06:26.185019] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:05:03.178 [2024-11-26 19:06:26.185068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3549537 ] 00:05:03.178 [2024-11-26 19:06:26.256241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.437 [2024-11-26 19:06:26.297978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.437 [2024-11-26 19:06:26.298029] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:03.437 [2024-11-26 19:06:26.298038] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:03.437 [2024-11-26 19:06:26.298046] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:03.437 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:03.437 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:03.437 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:03.437 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:03.437 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:03.437 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:03.438 19:06:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:03.438 19:06:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3549431 00:05:03.438 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3549431 ']' 00:05:03.438 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3549431 00:05:03.438 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:03.438 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.438 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3549431 00:05:03.438 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.438 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.438 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3549431' 00:05:03.438 killing process with pid 3549431 00:05:03.438 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3549431 00:05:03.438 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3549431 00:05:03.697 00:05:03.697 real 0m0.935s 00:05:03.697 user 0m1.011s 00:05:03.697 sys 0m0.371s 00:05:03.697 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.697 19:06:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:03.697 ************************************ 00:05:03.697 END TEST exit_on_failed_rpc_init 00:05:03.697 ************************************ 00:05:03.697 19:06:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:03.697 00:05:03.697 real 0m13.113s 00:05:03.697 user 0m12.397s 00:05:03.697 sys 0m1.525s 00:05:03.697 19:06:26 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.697 19:06:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.697 ************************************ 00:05:03.697 END TEST skip_rpc 00:05:03.697 ************************************ 00:05:03.697 19:06:26 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:03.697 19:06:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.697 19:06:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.697 19:06:26 -- 
common/autotest_common.sh@10 -- # set +x 00:05:03.697 ************************************ 00:05:03.697 START TEST rpc_client 00:05:03.697 ************************************ 00:05:03.697 19:06:26 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:03.957 * Looking for test storage... 00:05:03.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:03.957 19:06:26 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.957 19:06:26 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.957 19:06:26 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.957 19:06:26 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.957 19:06:26 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:03.957 19:06:26 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.957 19:06:26 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.957 --rc genhtml_branch_coverage=1 00:05:03.957 --rc genhtml_function_coverage=1 00:05:03.957 --rc genhtml_legend=1 00:05:03.957 --rc geninfo_all_blocks=1 00:05:03.957 --rc geninfo_unexecuted_blocks=1 00:05:03.957 00:05:03.957 ' 00:05:03.957 19:06:26 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.957 --rc genhtml_branch_coverage=1 00:05:03.957 --rc genhtml_function_coverage=1 00:05:03.957 --rc genhtml_legend=1 00:05:03.957 --rc geninfo_all_blocks=1 00:05:03.957 --rc geninfo_unexecuted_blocks=1 00:05:03.957 00:05:03.957 ' 00:05:03.957 19:06:26 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.957 --rc genhtml_branch_coverage=1 00:05:03.957 --rc genhtml_function_coverage=1 00:05:03.957 --rc genhtml_legend=1 00:05:03.957 --rc geninfo_all_blocks=1 00:05:03.957 --rc geninfo_unexecuted_blocks=1 00:05:03.957 00:05:03.957 ' 00:05:03.957 19:06:26 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.957 --rc genhtml_branch_coverage=1 00:05:03.957 --rc genhtml_function_coverage=1 00:05:03.957 --rc genhtml_legend=1 00:05:03.957 --rc geninfo_all_blocks=1 00:05:03.957 --rc geninfo_unexecuted_blocks=1 00:05:03.957 00:05:03.957 ' 00:05:03.957 19:06:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:03.957 OK 00:05:03.957 19:06:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:03.957 00:05:03.957 real 0m0.200s 00:05:03.957 user 0m0.122s 00:05:03.957 sys 0m0.091s 00:05:03.957 19:06:27 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.957 19:06:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:03.957 ************************************ 00:05:03.957 END TEST rpc_client 00:05:03.957 ************************************ 00:05:03.957 19:06:27 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
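The skip_rpc and rpc_client stages above exercise spdk_tgt purely over its UNIX-domain JSON-RPC socket. For orientation, here is a minimal hand-driven sketch of the same round trips using scripts/rpc.py; it is an illustration of calls visible in the traces, not part of the harness, and it assumes a target is already listening on the socket path used by the skip_rpc stages (/var/tmp/spdk.sock). The rpc variable is a convenience added here.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Basic liveness probe; in the skip_rpc stage above this is the call that is
  # expected to fail, because that target was started with --no-rpc-server.
  $rpc -s /var/tmp/spdk.sock spdk_get_version
  # Returns -19 (No such device) until a TCP transport exists, matching the trace.
  $rpc -s /var/tmp/spdk.sock nvmf_get_transports --trtype tcp
  $rpc -s /var/tmp/spdk.sock nvmf_create_transport -t tcp
  # Dump the live configuration; skip_rpc_with_json keeps it as test/rpc/config.json.
  $rpc -s /var/tmp/spdk.sock save_config
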
00:05:03.957 19:06:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.957 19:06:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.957 19:06:27 -- common/autotest_common.sh@10 -- # set +x 00:05:04.218 ************************************ 00:05:04.218 START TEST json_config 00:05:04.218 ************************************ 00:05:04.218 19:06:27 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:04.218 19:06:27 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.218 19:06:27 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.218 19:06:27 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.218 19:06:27 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.218 19:06:27 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.218 19:06:27 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.218 19:06:27 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.218 19:06:27 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.218 19:06:27 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.218 19:06:27 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.218 19:06:27 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.218 19:06:27 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.218 19:06:27 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.218 19:06:27 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.218 19:06:27 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.218 19:06:27 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:04.218 19:06:27 json_config -- scripts/common.sh@345 -- # : 1 00:05:04.218 19:06:27 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.218 19:06:27 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.218 19:06:27 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:04.218 19:06:27 json_config -- scripts/common.sh@353 -- # local d=1 00:05:04.218 19:06:27 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.218 19:06:27 json_config -- scripts/common.sh@355 -- # echo 1 00:05:04.218 19:06:27 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.218 19:06:27 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:04.218 19:06:27 json_config -- scripts/common.sh@353 -- # local d=2 00:05:04.218 19:06:27 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.218 19:06:27 json_config -- scripts/common.sh@355 -- # echo 2 00:05:04.218 19:06:27 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.218 19:06:27 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.218 19:06:27 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.218 19:06:27 json_config -- scripts/common.sh@368 -- # return 0 00:05:04.218 19:06:27 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.218 19:06:27 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.218 --rc genhtml_branch_coverage=1 00:05:04.218 --rc genhtml_function_coverage=1 00:05:04.218 --rc genhtml_legend=1 00:05:04.218 --rc geninfo_all_blocks=1 00:05:04.218 --rc geninfo_unexecuted_blocks=1 00:05:04.218 00:05:04.218 ' 00:05:04.218 19:06:27 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.218 --rc genhtml_branch_coverage=1 00:05:04.218 --rc genhtml_function_coverage=1 00:05:04.218 --rc genhtml_legend=1 00:05:04.218 --rc geninfo_all_blocks=1 00:05:04.218 --rc geninfo_unexecuted_blocks=1 00:05:04.218 00:05:04.218 ' 00:05:04.218 19:06:27 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:04.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.218 --rc genhtml_branch_coverage=1 00:05:04.218 --rc genhtml_function_coverage=1 00:05:04.218 --rc genhtml_legend=1 00:05:04.218 --rc geninfo_all_blocks=1 00:05:04.218 --rc geninfo_unexecuted_blocks=1 00:05:04.218 00:05:04.218 ' 00:05:04.218 19:06:27 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.218 --rc genhtml_branch_coverage=1 00:05:04.218 --rc genhtml_function_coverage=1 00:05:04.218 --rc genhtml_legend=1 00:05:04.218 --rc geninfo_all_blocks=1 00:05:04.218 --rc geninfo_unexecuted_blocks=1 00:05:04.218 00:05:04.218 ' 00:05:04.218 19:06:27 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:04.218 19:06:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:04.218 19:06:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:04.218 19:06:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.218 19:06:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.218 19:06:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.218 19:06:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.218 19:06:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.218 19:06:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:04.218 19:06:27 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.218 19:06:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.218 19:06:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.218 19:06:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:04.218 19:06:27 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:04.218 19:06:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.218 19:06:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.218 19:06:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:04.218 19:06:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.218 19:06:27 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:04.218 19:06:27 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:04.218 19:06:27 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.218 19:06:27 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.218 19:06:27 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.218 19:06:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.218 19:06:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.218 19:06:27 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.218 19:06:27 json_config -- paths/export.sh@5 -- # export PATH 00:05:04.218 19:06:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.218 19:06:27 json_config -- nvmf/common.sh@51 -- # : 0 00:05:04.218 19:06:27 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:04.219 19:06:27 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:05:04.219 19:06:27 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.219 19:06:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.219 19:06:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.219 19:06:27 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:04.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:04.219 19:06:27 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:04.219 19:06:27 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:04.219 19:06:27 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:04.219 INFO: JSON configuration test init 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:04.219 19:06:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.219 19:06:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:04.219 19:06:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.219 19:06:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.219 19:06:27 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:04.219 19:06:27 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:04.219 19:06:27 json_config -- json_config/common.sh@10 -- # shift 00:05:04.219 19:06:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:04.219 19:06:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:04.219 19:06:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:04.219 19:06:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.219 19:06:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.219 19:06:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3549879 00:05:04.219 19:06:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:04.219 Waiting for target to run... 00:05:04.219 19:06:27 json_config -- json_config/common.sh@25 -- # waitforlisten 3549879 /var/tmp/spdk_tgt.sock 00:05:04.219 19:06:27 json_config -- common/autotest_common.sh@835 -- # '[' -z 3549879 ']' 00:05:04.219 19:06:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:04.219 19:06:27 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:04.219 19:06:27 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.219 19:06:27 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:04.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:04.219 19:06:27 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.219 19:06:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.219 [2024-11-26 19:06:27.318468] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:05:04.219 [2024-11-26 19:06:27.318519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3549879 ] 00:05:04.787 [2024-11-26 19:06:27.766232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.787 [2024-11-26 19:06:27.823272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.046 19:06:28 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.046 19:06:28 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:05.046 19:06:28 json_config -- json_config/common.sh@26 -- # echo '' 00:05:05.046 00:05:05.046 19:06:28 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:05.046 19:06:28 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:05.046 19:06:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.046 19:06:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.046 19:06:28 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:05.046 19:06:28 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:05.046 19:06:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.046 19:06:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.305 19:06:28 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:05.305 19:06:28 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:05.305 19:06:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:08.596 19:06:31 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:08.596 19:06:31 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:08.596 19:06:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:08.596 19:06:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.596 19:06:31 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:08.596 19:06:31 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:08.596 19:06:31 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:08.596 19:06:31 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:08.596 19:06:31 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:08.596 19:06:31 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:08.596 19:06:31 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:08.596 19:06:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:08.597 19:06:31 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@54 -- # sort 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:08.597 19:06:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:08.597 19:06:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:08.597 19:06:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:08.597 19:06:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:08.597 19:06:31 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:08.597 19:06:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:08.856 MallocForNvmf0 00:05:08.856 19:06:31 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:08.856 19:06:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:08.856 MallocForNvmf1 00:05:08.856 19:06:31 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:08.856 19:06:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:09.115 [2024-11-26 19:06:32.079830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.115 19:06:32 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:09.115 19:06:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:09.381 19:06:32 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:09.381 19:06:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:09.381 19:06:32 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:09.381 19:06:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:09.643 19:06:32 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:09.643 19:06:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:09.903 [2024-11-26 19:06:32.810158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:09.903 19:06:32 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:09.903 19:06:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.903 19:06:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.903 19:06:32 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:09.903 19:06:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.903 19:06:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.903 19:06:32 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:09.903 19:06:32 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:09.903 19:06:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:10.163 MallocBdevForConfigChangeCheck 00:05:10.163 19:06:33 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:10.163 19:06:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.163 19:06:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.163 19:06:33 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:10.163 19:06:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.422 19:06:33 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:10.422 INFO: shutting down applications... 
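The RPC sequence logged above is the complete NVMe-oF target setup for this test: two malloc bdevs, a TCP transport, one subsystem with both bdevs attached as namespaces, and a listener on 127.0.0.1:4420. Below is a condensed sketch of that sequence, assuming an spdk_tgt is already listening on /var/tmp/spdk_tgt.sock; paths and argument values are copied from the log, and the trailing save_config redirect is illustrative rather than part of the test helpers.

#!/usr/bin/env bash
# Sketch only: replays the RPCs shown in the log above against a running spdk_tgt.
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$RPC bdev_malloc_create 8 512  --name MallocForNvmf0    # 8 MiB bdev, 512-byte blocks (sizes as in the log)
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MiB bdev, 1024-byte blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport, options as in the log
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$RPC save_config > /tmp/spdk_tgt_config.json            # snapshot of the resulting configuration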
00:05:10.422 19:06:33 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:10.422 19:06:33 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:10.422 19:06:33 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:10.422 19:06:33 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:12.954 Calling clear_iscsi_subsystem 00:05:12.954 Calling clear_nvmf_subsystem 00:05:12.954 Calling clear_nbd_subsystem 00:05:12.954 Calling clear_ublk_subsystem 00:05:12.954 Calling clear_vhost_blk_subsystem 00:05:12.954 Calling clear_vhost_scsi_subsystem 00:05:12.954 Calling clear_bdev_subsystem 00:05:12.954 19:06:35 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:12.954 19:06:35 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:12.954 19:06:35 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:12.954 19:06:35 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.954 19:06:35 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:12.954 19:06:35 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:12.954 19:06:35 json_config -- json_config/json_config.sh@352 -- # break 00:05:12.954 19:06:35 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:12.954 19:06:35 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:12.954 19:06:35 json_config -- json_config/common.sh@31 -- # local app=target 00:05:12.954 19:06:35 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:12.954 19:06:35 json_config -- json_config/common.sh@35 -- # [[ -n 3549879 ]] 00:05:12.954 19:06:35 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3549879 00:05:12.954 19:06:35 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:12.954 19:06:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.954 19:06:35 json_config -- json_config/common.sh@41 -- # kill -0 3549879 00:05:12.954 19:06:35 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:13.521 19:06:36 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:13.521 19:06:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.521 19:06:36 json_config -- json_config/common.sh@41 -- # kill -0 3549879 00:05:13.521 19:06:36 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:13.521 19:06:36 json_config -- json_config/common.sh@43 -- # break 00:05:13.521 19:06:36 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:13.521 19:06:36 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:13.521 SPDK target shutdown done 00:05:13.521 19:06:36 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:13.521 INFO: relaunching applications... 
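The shutdown above follows a fixed pattern from json_config/common.sh: clear the live configuration with clear_config.py, send SIGINT to the target, then poll the PID for up to 30 half-second intervals before printing "SPDK target shutdown done". A simplified restatement of that pattern follows; it is a sketch, not the actual helper, and the PID is the one from this run.

#!/usr/bin/env bash
# Simplified sketch of the clear-and-shutdown loop seen above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
PID=3549879    # target PID in this log; substitute the PID of your own spdk_tgt

$SPDK/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
kill -SIGINT "$PID"
for i in $(seq 1 30); do
    if ! kill -0 "$PID" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        exit 0
    fi
    sleep 0.5
done
echo "target pid $PID still alive after 15 s" >&2
exit 1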
00:05:13.521 19:06:36 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.521 19:06:36 json_config -- json_config/common.sh@9 -- # local app=target 00:05:13.521 19:06:36 json_config -- json_config/common.sh@10 -- # shift 00:05:13.521 19:06:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:13.521 19:06:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:13.521 19:06:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:13.521 19:06:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.521 19:06:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.521 19:06:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3551414 00:05:13.521 19:06:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:13.521 Waiting for target to run... 00:05:13.521 19:06:36 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.521 19:06:36 json_config -- json_config/common.sh@25 -- # waitforlisten 3551414 /var/tmp/spdk_tgt.sock 00:05:13.521 19:06:36 json_config -- common/autotest_common.sh@835 -- # '[' -z 3551414 ']' 00:05:13.521 19:06:36 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.521 19:06:36 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.521 19:06:36 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:13.521 19:06:36 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.521 19:06:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.521 [2024-11-26 19:06:36.461780] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:05:13.521 [2024-11-26 19:06:36.461832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3551414 ] 00:05:14.089 [2024-11-26 19:06:36.928796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.089 [2024-11-26 19:06:36.975945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.375 [2024-11-26 19:06:40.009746] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.375 [2024-11-26 19:06:40.042049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:17.634 19:06:40 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.634 19:06:40 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:17.634 19:06:40 json_config -- json_config/common.sh@26 -- # echo '' 00:05:17.634 00:05:17.634 19:06:40 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:17.634 19:06:40 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:17.634 INFO: Checking if target configuration is the same... 
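Relaunching reuses the configuration saved earlier: spdk_tgt is started with --json pointing at spdk_tgt_config.json, and the test blocks until the RPC socket answers. The sketch below approximates that wait with spdk_get_version as a cheap liveness probe; the real waitforlisten helper in autotest_common.sh is more elaborate, so treat this as an outline under those assumptions.

#!/usr/bin/env bash
# Sketch: relaunch spdk_tgt from the saved JSON config and wait for its RPC socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

$SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json "$SPDK/spdk_tgt_config.json" &
pid=$!

until $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$pid" 2>/dev/null || { echo 'spdk_tgt exited during startup' >&2; exit 1; }
    sleep 0.5
done
echo "target relaunched as pid $pid"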
00:05:17.634 19:06:40 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.634 19:06:40 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:17.634 19:06:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:17.634 + '[' 2 -ne 2 ']' 00:05:17.634 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:17.634 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:17.634 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:17.634 +++ basename /dev/fd/62 00:05:17.634 ++ mktemp /tmp/62.XXX 00:05:17.634 + tmp_file_1=/tmp/62.HcV 00:05:17.634 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.634 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:17.634 + tmp_file_2=/tmp/spdk_tgt_config.json.TC0 00:05:17.634 + ret=0 00:05:17.634 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.203 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.203 + diff -u /tmp/62.HcV /tmp/spdk_tgt_config.json.TC0 00:05:18.203 + echo 'INFO: JSON config files are the same' 00:05:18.203 INFO: JSON config files are the same 00:05:18.203 + rm /tmp/62.HcV /tmp/spdk_tgt_config.json.TC0 00:05:18.203 + exit 0 00:05:18.203 19:06:41 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:18.203 19:06:41 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:18.203 INFO: changing configuration and checking if this can be detected... 00:05:18.203 19:06:41 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:18.203 19:06:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:18.203 19:06:41 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.203 19:06:41 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:18.203 19:06:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.203 + '[' 2 -ne 2 ']' 00:05:18.203 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:18.203 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
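The "JSON config files are the same" verdict above comes from json_diff.sh, which normalises both JSON documents with config_filter.py -method sort before running diff -u, so ordering differences do not count as configuration changes. A minimal sketch of the same comparison, using the paths from this log:

#!/usr/bin/env bash
# Sketch of the normalise-then-diff comparison performed by json_diff.sh above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
FILTER="$SPDK/test/json_config/config_filter.py"

$RPC save_config | $FILTER -method sort                      > /tmp/live_config.json
$FILTER -method sort < "$SPDK/spdk_tgt_config.json"          > /tmp/saved_config.json

if diff -u /tmp/live_config.json /tmp/saved_config.json; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi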
00:05:18.203 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:18.203 +++ basename /dev/fd/62 00:05:18.203 ++ mktemp /tmp/62.XXX 00:05:18.203 + tmp_file_1=/tmp/62.iDW 00:05:18.203 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.203 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:18.203 + tmp_file_2=/tmp/spdk_tgt_config.json.ubk 00:05:18.203 + ret=0 00:05:18.203 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.770 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.770 + diff -u /tmp/62.iDW /tmp/spdk_tgt_config.json.ubk 00:05:18.770 + ret=1 00:05:18.770 + echo '=== Start of file: /tmp/62.iDW ===' 00:05:18.770 + cat /tmp/62.iDW 00:05:18.770 + echo '=== End of file: /tmp/62.iDW ===' 00:05:18.770 + echo '' 00:05:18.770 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ubk ===' 00:05:18.770 + cat /tmp/spdk_tgt_config.json.ubk 00:05:18.770 + echo '=== End of file: /tmp/spdk_tgt_config.json.ubk ===' 00:05:18.770 + echo '' 00:05:18.770 + rm /tmp/62.iDW /tmp/spdk_tgt_config.json.ubk 00:05:18.770 + exit 1 00:05:18.770 19:06:41 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:18.770 INFO: configuration change detected. 00:05:18.770 19:06:41 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:18.770 19:06:41 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:18.770 19:06:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:18.770 19:06:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.770 19:06:41 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:18.770 19:06:41 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:18.770 19:06:41 json_config -- json_config/json_config.sh@324 -- # [[ -n 3551414 ]] 00:05:18.770 19:06:41 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:18.770 19:06:41 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:18.770 19:06:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:18.770 19:06:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.770 19:06:41 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:18.770 19:06:41 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:18.770 19:06:41 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:18.770 19:06:41 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:18.770 19:06:41 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:18.770 19:06:41 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:18.770 19:06:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:18.770 19:06:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.770 19:06:41 json_config -- json_config/json_config.sh@330 -- # killprocess 3551414 00:05:18.770 19:06:41 json_config -- common/autotest_common.sh@954 -- # '[' -z 3551414 ']' 00:05:18.770 19:06:41 json_config -- common/autotest_common.sh@958 -- # kill -0 3551414 00:05:18.770 19:06:41 json_config -- common/autotest_common.sh@959 -- # uname 00:05:18.770 19:06:41 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.770 19:06:41 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3551414 00:05:18.770 19:06:41 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.770 19:06:41 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.770 19:06:41 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3551414' 00:05:18.770 killing process with pid 3551414 00:05:18.770 19:06:41 json_config -- common/autotest_common.sh@973 -- # kill 3551414 00:05:18.770 19:06:41 json_config -- common/autotest_common.sh@978 -- # wait 3551414 00:05:20.675 19:06:43 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.934 19:06:43 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:20.934 19:06:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:20.934 19:06:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.934 19:06:43 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:20.934 19:06:43 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:20.934 INFO: Success 00:05:20.934 00:05:20.934 real 0m16.751s 00:05:20.934 user 0m17.060s 00:05:20.934 sys 0m2.749s 00:05:20.934 19:06:43 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.934 19:06:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.934 ************************************ 00:05:20.934 END TEST json_config 00:05:20.934 ************************************ 00:05:20.934 19:06:43 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:20.934 19:06:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.934 19:06:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.934 19:06:43 -- common/autotest_common.sh@10 -- # set +x 00:05:20.934 ************************************ 00:05:20.934 START TEST json_config_extra_key 00:05:20.934 ************************************ 00:05:20.934 19:06:43 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:20.934 19:06:43 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:20.934 19:06:43 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:20.934 19:06:43 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:20.934 19:06:44 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.934 19:06:44 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.934 19:06:44 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.194 19:06:44 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:21.194 19:06:44 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.194 19:06:44 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.194 --rc genhtml_branch_coverage=1 00:05:21.194 --rc genhtml_function_coverage=1 00:05:21.194 --rc genhtml_legend=1 00:05:21.194 --rc geninfo_all_blocks=1 00:05:21.194 --rc geninfo_unexecuted_blocks=1 00:05:21.194 00:05:21.194 ' 00:05:21.194 19:06:44 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.194 --rc genhtml_branch_coverage=1 00:05:21.194 --rc genhtml_function_coverage=1 00:05:21.194 --rc genhtml_legend=1 00:05:21.194 --rc geninfo_all_blocks=1 00:05:21.194 --rc geninfo_unexecuted_blocks=1 00:05:21.194 00:05:21.194 ' 00:05:21.194 19:06:44 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.194 --rc genhtml_branch_coverage=1 00:05:21.194 --rc genhtml_function_coverage=1 00:05:21.194 --rc genhtml_legend=1 00:05:21.194 --rc geninfo_all_blocks=1 00:05:21.194 --rc geninfo_unexecuted_blocks=1 00:05:21.194 00:05:21.194 ' 00:05:21.194 19:06:44 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.194 --rc genhtml_branch_coverage=1 00:05:21.194 --rc genhtml_function_coverage=1 00:05:21.194 --rc genhtml_legend=1 00:05:21.194 --rc geninfo_all_blocks=1 00:05:21.194 --rc geninfo_unexecuted_blocks=1 00:05:21.194 00:05:21.194 ' 00:05:21.194 19:06:44 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:21.194 19:06:44 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:21.194 19:06:44 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.194 19:06:44 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.194 19:06:44 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.194 19:06:44 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.194 19:06:44 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.194 19:06:44 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.194 19:06:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:21.194 19:06:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:21.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:21.194 19:06:44 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:21.194 19:06:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:21.194 19:06:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:21.194 19:06:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:21.194 19:06:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:21.194 19:06:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:21.194 19:06:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:21.194 19:06:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:21.194 19:06:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:21.194 19:06:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:21.194 19:06:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:21.194 19:06:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:21.194 INFO: launching applications... 
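The "[: : integer expression expected" message printed while sourcing nvmf/common.sh above is a shell artifact: line 33 applies a numeric -eq test to a variable that is empty in this environment. It is harmless here (the comparison simply evaluates false and the script continues), but the failure mode is easy to reproduce. In the sketch below, SOME_FLAG is a made-up placeholder, not a variable from the SPDK scripts.

#!/usr/bin/env bash
# Reproduces the class of warning seen above and shows one defensive variant.
SOME_FLAG=""   # placeholder; in the log the tested variable is simply empty

[ "$SOME_FLAG" -eq 1 ] && echo flag set          # prints "[: : integer expression expected"
[ "${SOME_FLAG:-0}" -eq 1 ] && echo flag set     # defaulting to 0 avoids the warning
exit 0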
00:05:21.194 19:06:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:21.194 19:06:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:21.194 19:06:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:21.194 19:06:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:21.194 19:06:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:21.194 19:06:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:21.194 19:06:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.194 19:06:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.194 19:06:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3552911 00:05:21.194 19:06:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:21.194 Waiting for target to run... 00:05:21.194 19:06:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3552911 /var/tmp/spdk_tgt.sock 00:05:21.194 19:06:44 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3552911 ']' 00:05:21.194 19:06:44 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:21.194 19:06:44 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:21.194 19:06:44 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.194 19:06:44 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:21.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:21.194 19:06:44 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.194 19:06:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:21.194 [2024-11-26 19:06:44.133754] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:05:21.194 [2024-11-26 19:06:44.133798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3552911 ] 00:05:21.774 [2024-11-26 19:06:44.585005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.774 [2024-11-26 19:06:44.643800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.033 19:06:44 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.033 19:06:44 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:22.033 19:06:44 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:22.033 00:05:22.033 19:06:44 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:22.033 INFO: shutting down applications... 
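The launch above is driven by the associative arrays declared just before it (app_pid, app_socket, app_params, configs_path): the common helper looks up the parameters, RPC socket and config file for the "target" app and starts spdk_tgt with them. The sketch below is a simplified reading of that mechanism, not the actual json_config/common.sh helper; the array values are the ones visible in this log.

#!/usr/bin/env bash
# Simplified sketch of how the per-app arrays drive a generic launcher.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]="$SPDK/test/json_config/extra_key.json")

start_app() {
    local app=$1
    # app_params is intentionally left unquoted so it word-splits into separate flags
    $SPDK/build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" \
        --json "${configs_path[$app]}" &
    app_pid[$app]=$!
    echo "Waiting for $app to run... (pid ${app_pid[$app]})"
}

start_app target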
00:05:22.033 19:06:44 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:22.033 19:06:44 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:22.033 19:06:44 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:22.033 19:06:44 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3552911 ]] 00:05:22.033 19:06:44 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3552911 00:05:22.033 19:06:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:22.033 19:06:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.033 19:06:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3552911 00:05:22.033 19:06:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.600 19:06:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:22.600 19:06:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.600 19:06:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3552911 00:05:22.600 19:06:45 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:22.600 19:06:45 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:22.600 19:06:45 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:22.600 19:06:45 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:22.600 SPDK target shutdown done 00:05:22.600 19:06:45 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:22.600 Success 00:05:22.600 00:05:22.600 real 0m1.583s 00:05:22.601 user 0m1.213s 00:05:22.601 sys 0m0.564s 00:05:22.601 19:06:45 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.601 19:06:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:22.601 ************************************ 00:05:22.601 END TEST json_config_extra_key 00:05:22.601 ************************************ 00:05:22.601 19:06:45 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:22.601 19:06:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.601 19:06:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.601 19:06:45 -- common/autotest_common.sh@10 -- # set +x 00:05:22.601 ************************************ 00:05:22.601 START TEST alias_rpc 00:05:22.601 ************************************ 00:05:22.601 19:06:45 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:22.601 * Looking for test storage... 
00:05:22.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:22.601 19:06:45 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:22.601 19:06:45 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:22.601 19:06:45 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:22.601 19:06:45 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:22.601 19:06:45 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.601 19:06:45 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.601 19:06:45 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.601 19:06:45 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.601 19:06:45 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.860 19:06:45 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:22.860 19:06:45 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.860 19:06:45 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:22.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.860 --rc genhtml_branch_coverage=1 00:05:22.860 --rc genhtml_function_coverage=1 00:05:22.860 --rc genhtml_legend=1 00:05:22.860 --rc geninfo_all_blocks=1 00:05:22.860 --rc geninfo_unexecuted_blocks=1 00:05:22.860 00:05:22.860 ' 00:05:22.860 19:06:45 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:22.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.860 --rc genhtml_branch_coverage=1 00:05:22.860 --rc genhtml_function_coverage=1 00:05:22.860 --rc genhtml_legend=1 00:05:22.860 --rc geninfo_all_blocks=1 00:05:22.860 --rc geninfo_unexecuted_blocks=1 00:05:22.860 00:05:22.860 ' 00:05:22.860 19:06:45 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:22.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.860 --rc genhtml_branch_coverage=1 00:05:22.860 --rc genhtml_function_coverage=1 00:05:22.860 --rc genhtml_legend=1 00:05:22.860 --rc geninfo_all_blocks=1 00:05:22.860 --rc geninfo_unexecuted_blocks=1 00:05:22.860 00:05:22.860 ' 00:05:22.860 19:06:45 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:22.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.860 --rc genhtml_branch_coverage=1 00:05:22.860 --rc genhtml_function_coverage=1 00:05:22.860 --rc genhtml_legend=1 00:05:22.860 --rc geninfo_all_blocks=1 00:05:22.860 --rc geninfo_unexecuted_blocks=1 00:05:22.860 00:05:22.860 ' 00:05:22.860 19:06:45 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:22.860 19:06:45 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3553205 00:05:22.860 19:06:45 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3553205 00:05:22.860 19:06:45 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.860 19:06:45 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3553205 ']' 00:05:22.860 19:06:45 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.860 19:06:45 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.860 19:06:45 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.860 19:06:45 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.860 19:06:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.860 [2024-11-26 19:06:45.783095] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:05:22.860 [2024-11-26 19:06:45.783142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3553205 ] 00:05:22.860 [2024-11-26 19:06:45.856717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.860 [2024-11-26 19:06:45.895956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.796 19:06:46 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.796 19:06:46 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:23.796 19:06:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:23.796 19:06:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3553205 00:05:23.796 19:06:46 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3553205 ']' 00:05:23.796 19:06:46 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3553205 00:05:23.796 19:06:46 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:23.796 19:06:46 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.796 19:06:46 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3553205 00:05:23.796 19:06:46 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.796 19:06:46 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.796 19:06:46 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3553205' 00:05:23.796 killing process with pid 3553205 00:05:23.796 19:06:46 alias_rpc -- common/autotest_common.sh@973 -- # kill 3553205 00:05:23.796 19:06:46 alias_rpc -- common/autotest_common.sh@978 -- # wait 3553205 00:05:24.364 00:05:24.364 real 0m1.631s 00:05:24.364 user 0m1.797s 00:05:24.364 sys 0m0.443s 00:05:24.364 19:06:47 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.364 19:06:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.364 ************************************ 00:05:24.364 END TEST alias_rpc 00:05:24.364 ************************************ 00:05:24.364 19:06:47 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:24.364 19:06:47 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:24.364 19:06:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.364 19:06:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.364 19:06:47 -- common/autotest_common.sh@10 -- # set +x 00:05:24.364 ************************************ 00:05:24.364 START TEST spdkcli_tcp 00:05:24.364 ************************************ 00:05:24.364 19:06:47 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:24.364 * Looking for test storage... 
00:05:24.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:24.364 19:06:47 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:24.364 19:06:47 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:24.364 19:06:47 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.364 19:06:47 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.364 19:06:47 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:24.364 19:06:47 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.364 19:06:47 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:24.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.364 --rc genhtml_branch_coverage=1 00:05:24.364 --rc genhtml_function_coverage=1 00:05:24.364 --rc genhtml_legend=1 00:05:24.364 --rc geninfo_all_blocks=1 00:05:24.364 --rc geninfo_unexecuted_blocks=1 00:05:24.364 00:05:24.364 ' 00:05:24.365 19:06:47 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.365 --rc genhtml_branch_coverage=1 00:05:24.365 --rc genhtml_function_coverage=1 00:05:24.365 --rc genhtml_legend=1 00:05:24.365 --rc geninfo_all_blocks=1 00:05:24.365 --rc 
geninfo_unexecuted_blocks=1 00:05:24.365 00:05:24.365 ' 00:05:24.365 19:06:47 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:24.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.365 --rc genhtml_branch_coverage=1 00:05:24.365 --rc genhtml_function_coverage=1 00:05:24.365 --rc genhtml_legend=1 00:05:24.365 --rc geninfo_all_blocks=1 00:05:24.365 --rc geninfo_unexecuted_blocks=1 00:05:24.365 00:05:24.365 ' 00:05:24.365 19:06:47 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.365 --rc genhtml_branch_coverage=1 00:05:24.365 --rc genhtml_function_coverage=1 00:05:24.365 --rc genhtml_legend=1 00:05:24.365 --rc geninfo_all_blocks=1 00:05:24.365 --rc geninfo_unexecuted_blocks=1 00:05:24.365 00:05:24.365 ' 00:05:24.365 19:06:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:24.365 19:06:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:24.365 19:06:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:24.365 19:06:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:24.365 19:06:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:24.365 19:06:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:24.365 19:06:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:24.365 19:06:47 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.365 19:06:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.365 19:06:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3553502 00:05:24.365 19:06:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3553502 00:05:24.365 19:06:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:24.365 19:06:47 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3553502 ']' 00:05:24.365 19:06:47 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.365 19:06:47 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.365 19:06:47 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.365 19:06:47 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.365 19:06:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.623 [2024-11-26 19:06:47.482433] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:05:24.624 [2024-11-26 19:06:47.482480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3553502 ] 00:05:24.624 [2024-11-26 19:06:47.556885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.624 [2024-11-26 19:06:47.600316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.624 [2024-11-26 19:06:47.600318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.882 19:06:47 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.882 19:06:47 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:24.882 19:06:47 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3553675 00:05:24.882 19:06:47 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:24.882 19:06:47 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:25.142 [ 00:05:25.142 "bdev_malloc_delete", 00:05:25.142 "bdev_malloc_create", 00:05:25.142 "bdev_null_resize", 00:05:25.142 "bdev_null_delete", 00:05:25.142 "bdev_null_create", 00:05:25.142 "bdev_nvme_cuse_unregister", 00:05:25.142 "bdev_nvme_cuse_register", 00:05:25.142 "bdev_opal_new_user", 00:05:25.142 "bdev_opal_set_lock_state", 00:05:25.142 "bdev_opal_delete", 00:05:25.142 "bdev_opal_get_info", 00:05:25.142 "bdev_opal_create", 00:05:25.142 "bdev_nvme_opal_revert", 00:05:25.142 "bdev_nvme_opal_init", 00:05:25.142 "bdev_nvme_send_cmd", 00:05:25.142 "bdev_nvme_set_keys", 00:05:25.142 "bdev_nvme_get_path_iostat", 00:05:25.142 "bdev_nvme_get_mdns_discovery_info", 00:05:25.142 "bdev_nvme_stop_mdns_discovery", 00:05:25.142 "bdev_nvme_start_mdns_discovery", 00:05:25.142 "bdev_nvme_set_multipath_policy", 00:05:25.142 "bdev_nvme_set_preferred_path", 00:05:25.142 "bdev_nvme_get_io_paths", 00:05:25.142 "bdev_nvme_remove_error_injection", 00:05:25.142 "bdev_nvme_add_error_injection", 00:05:25.142 "bdev_nvme_get_discovery_info", 00:05:25.143 "bdev_nvme_stop_discovery", 00:05:25.143 "bdev_nvme_start_discovery", 00:05:25.143 "bdev_nvme_get_controller_health_info", 00:05:25.143 "bdev_nvme_disable_controller", 00:05:25.143 "bdev_nvme_enable_controller", 00:05:25.143 "bdev_nvme_reset_controller", 00:05:25.143 "bdev_nvme_get_transport_statistics", 00:05:25.143 "bdev_nvme_apply_firmware", 00:05:25.143 "bdev_nvme_detach_controller", 00:05:25.143 "bdev_nvme_get_controllers", 00:05:25.143 "bdev_nvme_attach_controller", 00:05:25.143 "bdev_nvme_set_hotplug", 00:05:25.143 "bdev_nvme_set_options", 00:05:25.143 "bdev_passthru_delete", 00:05:25.143 "bdev_passthru_create", 00:05:25.143 "bdev_lvol_set_parent_bdev", 00:05:25.143 "bdev_lvol_set_parent", 00:05:25.143 "bdev_lvol_check_shallow_copy", 00:05:25.143 "bdev_lvol_start_shallow_copy", 00:05:25.143 "bdev_lvol_grow_lvstore", 00:05:25.143 "bdev_lvol_get_lvols", 00:05:25.143 "bdev_lvol_get_lvstores", 00:05:25.143 "bdev_lvol_delete", 00:05:25.143 "bdev_lvol_set_read_only", 00:05:25.143 "bdev_lvol_resize", 00:05:25.143 "bdev_lvol_decouple_parent", 00:05:25.143 "bdev_lvol_inflate", 00:05:25.143 "bdev_lvol_rename", 00:05:25.143 "bdev_lvol_clone_bdev", 00:05:25.143 "bdev_lvol_clone", 00:05:25.143 "bdev_lvol_snapshot", 00:05:25.143 "bdev_lvol_create", 00:05:25.143 "bdev_lvol_delete_lvstore", 00:05:25.143 "bdev_lvol_rename_lvstore", 
00:05:25.143 "bdev_lvol_create_lvstore", 00:05:25.143 "bdev_raid_set_options", 00:05:25.143 "bdev_raid_remove_base_bdev", 00:05:25.143 "bdev_raid_add_base_bdev", 00:05:25.143 "bdev_raid_delete", 00:05:25.143 "bdev_raid_create", 00:05:25.143 "bdev_raid_get_bdevs", 00:05:25.143 "bdev_error_inject_error", 00:05:25.143 "bdev_error_delete", 00:05:25.143 "bdev_error_create", 00:05:25.143 "bdev_split_delete", 00:05:25.143 "bdev_split_create", 00:05:25.143 "bdev_delay_delete", 00:05:25.143 "bdev_delay_create", 00:05:25.143 "bdev_delay_update_latency", 00:05:25.143 "bdev_zone_block_delete", 00:05:25.143 "bdev_zone_block_create", 00:05:25.143 "blobfs_create", 00:05:25.143 "blobfs_detect", 00:05:25.143 "blobfs_set_cache_size", 00:05:25.143 "bdev_aio_delete", 00:05:25.143 "bdev_aio_rescan", 00:05:25.143 "bdev_aio_create", 00:05:25.143 "bdev_ftl_set_property", 00:05:25.143 "bdev_ftl_get_properties", 00:05:25.143 "bdev_ftl_get_stats", 00:05:25.143 "bdev_ftl_unmap", 00:05:25.143 "bdev_ftl_unload", 00:05:25.143 "bdev_ftl_delete", 00:05:25.143 "bdev_ftl_load", 00:05:25.143 "bdev_ftl_create", 00:05:25.143 "bdev_virtio_attach_controller", 00:05:25.143 "bdev_virtio_scsi_get_devices", 00:05:25.143 "bdev_virtio_detach_controller", 00:05:25.143 "bdev_virtio_blk_set_hotplug", 00:05:25.143 "bdev_iscsi_delete", 00:05:25.143 "bdev_iscsi_create", 00:05:25.143 "bdev_iscsi_set_options", 00:05:25.143 "accel_error_inject_error", 00:05:25.143 "ioat_scan_accel_module", 00:05:25.143 "dsa_scan_accel_module", 00:05:25.143 "iaa_scan_accel_module", 00:05:25.143 "vfu_virtio_create_fs_endpoint", 00:05:25.143 "vfu_virtio_create_scsi_endpoint", 00:05:25.143 "vfu_virtio_scsi_remove_target", 00:05:25.143 "vfu_virtio_scsi_add_target", 00:05:25.143 "vfu_virtio_create_blk_endpoint", 00:05:25.143 "vfu_virtio_delete_endpoint", 00:05:25.143 "keyring_file_remove_key", 00:05:25.143 "keyring_file_add_key", 00:05:25.143 "keyring_linux_set_options", 00:05:25.143 "fsdev_aio_delete", 00:05:25.143 "fsdev_aio_create", 00:05:25.143 "iscsi_get_histogram", 00:05:25.143 "iscsi_enable_histogram", 00:05:25.143 "iscsi_set_options", 00:05:25.143 "iscsi_get_auth_groups", 00:05:25.143 "iscsi_auth_group_remove_secret", 00:05:25.143 "iscsi_auth_group_add_secret", 00:05:25.143 "iscsi_delete_auth_group", 00:05:25.143 "iscsi_create_auth_group", 00:05:25.143 "iscsi_set_discovery_auth", 00:05:25.143 "iscsi_get_options", 00:05:25.143 "iscsi_target_node_request_logout", 00:05:25.143 "iscsi_target_node_set_redirect", 00:05:25.143 "iscsi_target_node_set_auth", 00:05:25.143 "iscsi_target_node_add_lun", 00:05:25.143 "iscsi_get_stats", 00:05:25.143 "iscsi_get_connections", 00:05:25.143 "iscsi_portal_group_set_auth", 00:05:25.143 "iscsi_start_portal_group", 00:05:25.143 "iscsi_delete_portal_group", 00:05:25.143 "iscsi_create_portal_group", 00:05:25.143 "iscsi_get_portal_groups", 00:05:25.143 "iscsi_delete_target_node", 00:05:25.143 "iscsi_target_node_remove_pg_ig_maps", 00:05:25.143 "iscsi_target_node_add_pg_ig_maps", 00:05:25.143 "iscsi_create_target_node", 00:05:25.143 "iscsi_get_target_nodes", 00:05:25.143 "iscsi_delete_initiator_group", 00:05:25.143 "iscsi_initiator_group_remove_initiators", 00:05:25.143 "iscsi_initiator_group_add_initiators", 00:05:25.143 "iscsi_create_initiator_group", 00:05:25.143 "iscsi_get_initiator_groups", 00:05:25.143 "nvmf_set_crdt", 00:05:25.143 "nvmf_set_config", 00:05:25.143 "nvmf_set_max_subsystems", 00:05:25.143 "nvmf_stop_mdns_prr", 00:05:25.143 "nvmf_publish_mdns_prr", 00:05:25.143 "nvmf_subsystem_get_listeners", 00:05:25.143 
"nvmf_subsystem_get_qpairs", 00:05:25.143 "nvmf_subsystem_get_controllers", 00:05:25.143 "nvmf_get_stats", 00:05:25.143 "nvmf_get_transports", 00:05:25.143 "nvmf_create_transport", 00:05:25.143 "nvmf_get_targets", 00:05:25.143 "nvmf_delete_target", 00:05:25.143 "nvmf_create_target", 00:05:25.143 "nvmf_subsystem_allow_any_host", 00:05:25.143 "nvmf_subsystem_set_keys", 00:05:25.143 "nvmf_subsystem_remove_host", 00:05:25.143 "nvmf_subsystem_add_host", 00:05:25.143 "nvmf_ns_remove_host", 00:05:25.143 "nvmf_ns_add_host", 00:05:25.143 "nvmf_subsystem_remove_ns", 00:05:25.143 "nvmf_subsystem_set_ns_ana_group", 00:05:25.143 "nvmf_subsystem_add_ns", 00:05:25.143 "nvmf_subsystem_listener_set_ana_state", 00:05:25.143 "nvmf_discovery_get_referrals", 00:05:25.143 "nvmf_discovery_remove_referral", 00:05:25.143 "nvmf_discovery_add_referral", 00:05:25.143 "nvmf_subsystem_remove_listener", 00:05:25.143 "nvmf_subsystem_add_listener", 00:05:25.143 "nvmf_delete_subsystem", 00:05:25.143 "nvmf_create_subsystem", 00:05:25.143 "nvmf_get_subsystems", 00:05:25.143 "env_dpdk_get_mem_stats", 00:05:25.143 "nbd_get_disks", 00:05:25.143 "nbd_stop_disk", 00:05:25.143 "nbd_start_disk", 00:05:25.143 "ublk_recover_disk", 00:05:25.143 "ublk_get_disks", 00:05:25.143 "ublk_stop_disk", 00:05:25.143 "ublk_start_disk", 00:05:25.143 "ublk_destroy_target", 00:05:25.143 "ublk_create_target", 00:05:25.143 "virtio_blk_create_transport", 00:05:25.143 "virtio_blk_get_transports", 00:05:25.143 "vhost_controller_set_coalescing", 00:05:25.143 "vhost_get_controllers", 00:05:25.143 "vhost_delete_controller", 00:05:25.143 "vhost_create_blk_controller", 00:05:25.143 "vhost_scsi_controller_remove_target", 00:05:25.143 "vhost_scsi_controller_add_target", 00:05:25.143 "vhost_start_scsi_controller", 00:05:25.143 "vhost_create_scsi_controller", 00:05:25.143 "thread_set_cpumask", 00:05:25.143 "scheduler_set_options", 00:05:25.143 "framework_get_governor", 00:05:25.143 "framework_get_scheduler", 00:05:25.143 "framework_set_scheduler", 00:05:25.143 "framework_get_reactors", 00:05:25.143 "thread_get_io_channels", 00:05:25.143 "thread_get_pollers", 00:05:25.143 "thread_get_stats", 00:05:25.143 "framework_monitor_context_switch", 00:05:25.143 "spdk_kill_instance", 00:05:25.143 "log_enable_timestamps", 00:05:25.143 "log_get_flags", 00:05:25.143 "log_clear_flag", 00:05:25.143 "log_set_flag", 00:05:25.143 "log_get_level", 00:05:25.143 "log_set_level", 00:05:25.143 "log_get_print_level", 00:05:25.143 "log_set_print_level", 00:05:25.143 "framework_enable_cpumask_locks", 00:05:25.143 "framework_disable_cpumask_locks", 00:05:25.143 "framework_wait_init", 00:05:25.143 "framework_start_init", 00:05:25.143 "scsi_get_devices", 00:05:25.143 "bdev_get_histogram", 00:05:25.143 "bdev_enable_histogram", 00:05:25.143 "bdev_set_qos_limit", 00:05:25.143 "bdev_set_qd_sampling_period", 00:05:25.143 "bdev_get_bdevs", 00:05:25.143 "bdev_reset_iostat", 00:05:25.143 "bdev_get_iostat", 00:05:25.143 "bdev_examine", 00:05:25.143 "bdev_wait_for_examine", 00:05:25.143 "bdev_set_options", 00:05:25.143 "accel_get_stats", 00:05:25.143 "accel_set_options", 00:05:25.143 "accel_set_driver", 00:05:25.143 "accel_crypto_key_destroy", 00:05:25.143 "accel_crypto_keys_get", 00:05:25.143 "accel_crypto_key_create", 00:05:25.143 "accel_assign_opc", 00:05:25.143 "accel_get_module_info", 00:05:25.143 "accel_get_opc_assignments", 00:05:25.143 "vmd_rescan", 00:05:25.143 "vmd_remove_device", 00:05:25.143 "vmd_enable", 00:05:25.143 "sock_get_default_impl", 00:05:25.143 "sock_set_default_impl", 
00:05:25.143 "sock_impl_set_options", 00:05:25.143 "sock_impl_get_options", 00:05:25.143 "iobuf_get_stats", 00:05:25.143 "iobuf_set_options", 00:05:25.143 "keyring_get_keys", 00:05:25.143 "vfu_tgt_set_base_path", 00:05:25.143 "framework_get_pci_devices", 00:05:25.143 "framework_get_config", 00:05:25.143 "framework_get_subsystems", 00:05:25.143 "fsdev_set_opts", 00:05:25.143 "fsdev_get_opts", 00:05:25.143 "trace_get_info", 00:05:25.143 "trace_get_tpoint_group_mask", 00:05:25.143 "trace_disable_tpoint_group", 00:05:25.143 "trace_enable_tpoint_group", 00:05:25.143 "trace_clear_tpoint_mask", 00:05:25.143 "trace_set_tpoint_mask", 00:05:25.143 "notify_get_notifications", 00:05:25.143 "notify_get_types", 00:05:25.143 "spdk_get_version", 00:05:25.143 "rpc_get_methods" 00:05:25.143 ] 00:05:25.143 19:06:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:25.144 19:06:48 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:25.144 19:06:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.144 19:06:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:25.144 19:06:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3553502 00:05:25.144 19:06:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3553502 ']' 00:05:25.144 19:06:48 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3553502 00:05:25.144 19:06:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:25.144 19:06:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.144 19:06:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3553502 00:05:25.144 19:06:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.144 19:06:48 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.144 19:06:48 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3553502' 00:05:25.144 killing process with pid 3553502 00:05:25.144 19:06:48 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3553502 00:05:25.144 19:06:48 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3553502 00:05:25.403 00:05:25.403 real 0m1.155s 00:05:25.403 user 0m1.938s 00:05:25.403 sys 0m0.446s 00:05:25.403 19:06:48 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.403 19:06:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.403 ************************************ 00:05:25.403 END TEST spdkcli_tcp 00:05:25.403 ************************************ 00:05:25.403 19:06:48 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:25.403 19:06:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.403 19:06:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.403 19:06:48 -- common/autotest_common.sh@10 -- # set +x 00:05:25.403 ************************************ 00:05:25.403 START TEST dpdk_mem_utility 00:05:25.403 ************************************ 00:05:25.403 19:06:48 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:25.663 * Looking for test storage... 
00:05:25.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:25.663 19:06:48 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:25.663 19:06:48 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:25.663 19:06:48 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:25.663 19:06:48 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.663 19:06:48 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:25.663 19:06:48 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.663 19:06:48 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:25.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.663 --rc genhtml_branch_coverage=1 00:05:25.663 --rc genhtml_function_coverage=1 00:05:25.663 --rc genhtml_legend=1 00:05:25.663 --rc geninfo_all_blocks=1 00:05:25.663 --rc geninfo_unexecuted_blocks=1 00:05:25.663 00:05:25.663 ' 00:05:25.663 19:06:48 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:25.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.663 --rc 
genhtml_branch_coverage=1 00:05:25.663 --rc genhtml_function_coverage=1 00:05:25.663 --rc genhtml_legend=1 00:05:25.663 --rc geninfo_all_blocks=1 00:05:25.663 --rc geninfo_unexecuted_blocks=1 00:05:25.663 00:05:25.663 ' 00:05:25.663 19:06:48 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:25.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.663 --rc genhtml_branch_coverage=1 00:05:25.663 --rc genhtml_function_coverage=1 00:05:25.663 --rc genhtml_legend=1 00:05:25.663 --rc geninfo_all_blocks=1 00:05:25.663 --rc geninfo_unexecuted_blocks=1 00:05:25.663 00:05:25.663 ' 00:05:25.663 19:06:48 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:25.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.663 --rc genhtml_branch_coverage=1 00:05:25.663 --rc genhtml_function_coverage=1 00:05:25.663 --rc genhtml_legend=1 00:05:25.663 --rc geninfo_all_blocks=1 00:05:25.663 --rc geninfo_unexecuted_blocks=1 00:05:25.663 00:05:25.663 ' 00:05:25.663 19:06:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:25.663 19:06:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3553808 00:05:25.663 19:06:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3553808 00:05:25.663 19:06:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.663 19:06:48 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3553808 ']' 00:05:25.663 19:06:48 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.663 19:06:48 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.663 19:06:48 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.663 19:06:48 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.663 19:06:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.663 [2024-11-26 19:06:48.701326] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
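The dpdk_mem_utility test starting here asks the freshly launched target to dump its DPDK memory layout and then post-processes that dump with scripts/dpdk_mem_info.py; the heap/mempool/memzone summary and the per-element listing that follow are that script's output. A condensed sketch of the same sequence, assuming a running target on the default /var/tmp/spdk.sock socket:

    # have the target write its DPDK memory dump (the RPC reports /tmp/spdk_mem_dump.txt)
    scripts/rpc.py env_dpdk_get_mem_stats

    # summarize the dump, then print the detailed element and memzone listing (-m 0, as the test does)
    scripts/dpdk_mem_info.py
    scripts/dpdk_mem_info.py -m 0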
00:05:25.663 [2024-11-26 19:06:48.701374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3553808 ] 00:05:25.923 [2024-11-26 19:06:48.775598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.923 [2024-11-26 19:06:48.816218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.184 19:06:49 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.184 19:06:49 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:26.184 19:06:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:26.184 19:06:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:26.184 19:06:49 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.184 19:06:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:26.184 { 00:05:26.184 "filename": "/tmp/spdk_mem_dump.txt" 00:05:26.184 } 00:05:26.184 19:06:49 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.184 19:06:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:26.184 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:26.184 1 heaps totaling size 818.000000 MiB 00:05:26.184 size: 818.000000 MiB heap id: 0 00:05:26.184 end heaps---------- 00:05:26.184 9 mempools totaling size 603.782043 MiB 00:05:26.184 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:26.184 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:26.184 size: 100.555481 MiB name: bdev_io_3553808 00:05:26.184 size: 50.003479 MiB name: msgpool_3553808 00:05:26.184 size: 36.509338 MiB name: fsdev_io_3553808 00:05:26.184 size: 21.763794 MiB name: PDU_Pool 00:05:26.184 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:26.184 size: 4.133484 MiB name: evtpool_3553808 00:05:26.184 size: 0.026123 MiB name: Session_Pool 00:05:26.184 end mempools------- 00:05:26.184 6 memzones totaling size 4.142822 MiB 00:05:26.184 size: 1.000366 MiB name: RG_ring_0_3553808 00:05:26.184 size: 1.000366 MiB name: RG_ring_1_3553808 00:05:26.184 size: 1.000366 MiB name: RG_ring_4_3553808 00:05:26.184 size: 1.000366 MiB name: RG_ring_5_3553808 00:05:26.184 size: 0.125366 MiB name: RG_ring_2_3553808 00:05:26.184 size: 0.015991 MiB name: RG_ring_3_3553808 00:05:26.184 end memzones------- 00:05:26.184 19:06:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:26.184 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:26.184 list of free elements. 
size: 10.852478 MiB 00:05:26.184 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:26.184 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:26.184 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:26.184 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:26.184 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:26.184 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:26.184 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:26.184 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:26.184 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:26.184 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:26.184 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:26.184 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:26.184 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:26.184 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:26.184 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:26.184 list of standard malloc elements. size: 199.218628 MiB 00:05:26.184 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:26.184 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:26.184 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:26.184 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:26.184 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:26.184 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:26.184 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:26.184 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:26.184 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:26.184 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:26.184 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:26.184 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:26.184 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:26.184 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:26.184 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:26.184 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:26.184 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:26.184 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:26.184 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:26.184 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:26.184 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:26.184 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:26.184 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:26.184 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:26.184 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:26.184 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:26.184 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:26.184 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:26.184 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:26.184 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:26.184 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:26.184 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:26.184 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:26.184 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:26.184 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:26.184 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:26.184 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:26.184 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:26.184 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:26.184 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:26.184 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:26.184 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:26.184 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:26.184 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:26.184 list of memzone associated elements. size: 607.928894 MiB 00:05:26.184 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:26.184 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:26.184 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:26.184 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:26.184 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:26.184 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3553808_0 00:05:26.184 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:26.184 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3553808_0 00:05:26.184 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:26.184 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3553808_0 00:05:26.184 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:26.184 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:26.184 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:26.184 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:26.184 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:26.184 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3553808_0 00:05:26.184 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:26.184 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3553808 00:05:26.184 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:26.184 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3553808 00:05:26.184 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:26.184 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:26.184 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:26.184 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:26.184 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:26.184 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:26.184 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:26.184 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:26.184 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:26.185 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3553808 00:05:26.185 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:26.185 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3553808 00:05:26.185 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:26.185 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3553808 00:05:26.185 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:05:26.185 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3553808 00:05:26.185 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:26.185 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3553808 00:05:26.185 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:26.185 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3553808 00:05:26.185 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:26.185 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:26.185 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:26.185 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:26.185 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:26.185 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:26.185 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:26.185 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3553808 00:05:26.185 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:26.185 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3553808 00:05:26.185 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:26.185 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:26.185 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:26.185 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:26.185 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:26.185 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3553808 00:05:26.185 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:26.185 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:26.185 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:26.185 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3553808 00:05:26.185 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:26.185 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3553808 00:05:26.185 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:26.185 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3553808 00:05:26.185 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:26.185 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:26.185 19:06:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:26.185 19:06:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3553808 00:05:26.185 19:06:49 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3553808 ']' 00:05:26.185 19:06:49 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3553808 00:05:26.185 19:06:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:26.185 19:06:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.185 19:06:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3553808 00:05:26.185 19:06:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.185 19:06:49 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.185 19:06:49 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3553808' 00:05:26.185 killing process with pid 3553808 00:05:26.185 19:06:49 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3553808 00:05:26.185 19:06:49 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3553808 00:05:26.444 00:05:26.444 real 0m1.035s 00:05:26.444 user 0m0.983s 00:05:26.444 sys 0m0.403s 00:05:26.444 19:06:49 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.444 19:06:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:26.444 ************************************ 00:05:26.444 END TEST dpdk_mem_utility 00:05:26.444 ************************************ 00:05:26.444 19:06:49 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:26.444 19:06:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.444 19:06:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.444 19:06:49 -- common/autotest_common.sh@10 -- # set +x 00:05:26.703 ************************************ 00:05:26.703 START TEST event 00:05:26.703 ************************************ 00:05:26.703 19:06:49 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:26.703 * Looking for test storage... 00:05:26.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:26.703 19:06:49 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.703 19:06:49 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.703 19:06:49 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.703 19:06:49 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.703 19:06:49 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.703 19:06:49 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.703 19:06:49 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.703 19:06:49 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.703 19:06:49 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.703 19:06:49 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.703 19:06:49 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.703 19:06:49 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.703 19:06:49 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.703 19:06:49 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.703 19:06:49 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.703 19:06:49 event -- scripts/common.sh@344 -- # case "$op" in 00:05:26.703 19:06:49 event -- scripts/common.sh@345 -- # : 1 00:05:26.703 19:06:49 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.703 19:06:49 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.703 19:06:49 event -- scripts/common.sh@365 -- # decimal 1 00:05:26.703 19:06:49 event -- scripts/common.sh@353 -- # local d=1 00:05:26.703 19:06:49 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.703 19:06:49 event -- scripts/common.sh@355 -- # echo 1 00:05:26.703 19:06:49 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.703 19:06:49 event -- scripts/common.sh@366 -- # decimal 2 00:05:26.703 19:06:49 event -- scripts/common.sh@353 -- # local d=2 00:05:26.703 19:06:49 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.703 19:06:49 event -- scripts/common.sh@355 -- # echo 2 00:05:26.703 19:06:49 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.703 19:06:49 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.703 19:06:49 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.703 19:06:49 event -- scripts/common.sh@368 -- # return 0 00:05:26.703 19:06:49 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.703 19:06:49 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.703 --rc genhtml_branch_coverage=1 00:05:26.703 --rc genhtml_function_coverage=1 00:05:26.703 --rc genhtml_legend=1 00:05:26.703 --rc geninfo_all_blocks=1 00:05:26.703 --rc geninfo_unexecuted_blocks=1 00:05:26.703 00:05:26.703 ' 00:05:26.703 19:06:49 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.703 --rc genhtml_branch_coverage=1 00:05:26.703 --rc genhtml_function_coverage=1 00:05:26.703 --rc genhtml_legend=1 00:05:26.703 --rc geninfo_all_blocks=1 00:05:26.703 --rc geninfo_unexecuted_blocks=1 00:05:26.703 00:05:26.703 ' 00:05:26.703 19:06:49 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.704 --rc genhtml_branch_coverage=1 00:05:26.704 --rc genhtml_function_coverage=1 00:05:26.704 --rc genhtml_legend=1 00:05:26.704 --rc geninfo_all_blocks=1 00:05:26.704 --rc geninfo_unexecuted_blocks=1 00:05:26.704 00:05:26.704 ' 00:05:26.704 19:06:49 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.704 --rc genhtml_branch_coverage=1 00:05:26.704 --rc genhtml_function_coverage=1 00:05:26.704 --rc genhtml_legend=1 00:05:26.704 --rc geninfo_all_blocks=1 00:05:26.704 --rc geninfo_unexecuted_blocks=1 00:05:26.704 00:05:26.704 ' 00:05:26.704 19:06:49 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:26.704 19:06:49 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:26.704 19:06:49 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.704 19:06:49 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:26.704 19:06:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.704 19:06:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.704 ************************************ 00:05:26.704 START TEST event_perf 00:05:26.704 ************************************ 00:05:26.704 19:06:49 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:26.704 Running I/O for 1 seconds...[2024-11-26 19:06:49.804901] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:05:26.704 [2024-11-26 19:06:49.804969] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3554100 ] 00:05:26.963 [2024-11-26 19:06:49.884248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:26.963 [2024-11-26 19:06:49.927684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.963 [2024-11-26 19:06:49.927780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.963 [2024-11-26 19:06:49.927813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.963 [2024-11-26 19:06:49.927815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:27.900 Running I/O for 1 seconds... 00:05:27.900 lcore 0: 205989 00:05:27.900 lcore 1: 205988 00:05:27.900 lcore 2: 205986 00:05:27.900 lcore 3: 205987 00:05:27.900 done. 00:05:27.900 00:05:27.900 real 0m1.189s 00:05:27.900 user 0m4.112s 00:05:27.900 sys 0m0.073s 00:05:27.900 19:06:50 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.900 19:06:50 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.900 ************************************ 00:05:27.900 END TEST event_perf 00:05:27.900 ************************************ 00:05:27.900 19:06:51 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:27.900 19:06:51 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:27.900 19:06:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.900 19:06:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.160 ************************************ 00:05:28.160 START TEST event_reactor 00:05:28.160 ************************************ 00:05:28.160 19:06:51 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:28.160 [2024-11-26 19:06:51.063472] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:05:28.160 [2024-11-26 19:06:51.063528] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3554353 ] 00:05:28.160 [2024-11-26 19:06:51.140574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.160 [2024-11-26 19:06:51.180134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.540 test_start 00:05:29.540 oneshot 00:05:29.540 tick 100 00:05:29.540 tick 100 00:05:29.540 tick 250 00:05:29.540 tick 100 00:05:29.540 tick 100 00:05:29.540 tick 250 00:05:29.540 tick 100 00:05:29.540 tick 500 00:05:29.540 tick 100 00:05:29.540 tick 100 00:05:29.540 tick 250 00:05:29.540 tick 100 00:05:29.540 tick 100 00:05:29.540 test_end 00:05:29.540 00:05:29.540 real 0m1.175s 00:05:29.540 user 0m1.089s 00:05:29.540 sys 0m0.082s 00:05:29.540 19:06:52 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.540 19:06:52 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:29.540 ************************************ 00:05:29.540 END TEST event_reactor 00:05:29.540 ************************************ 00:05:29.540 19:06:52 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:29.540 19:06:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:29.540 19:06:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.540 19:06:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.540 ************************************ 00:05:29.540 START TEST event_reactor_perf 00:05:29.540 ************************************ 00:05:29.540 19:06:52 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:29.540 [2024-11-26 19:06:52.308206] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:05:29.540 [2024-11-26 19:06:52.308275] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3554600 ] 00:05:29.540 [2024-11-26 19:06:52.388445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.540 [2024-11-26 19:06:52.427519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.479 test_start 00:05:30.479 test_end 00:05:30.479 Performance: 514828 events per second 00:05:30.479 00:05:30.479 real 0m1.178s 00:05:30.479 user 0m1.094s 00:05:30.479 sys 0m0.080s 00:05:30.479 19:06:53 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.479 19:06:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.479 ************************************ 00:05:30.479 END TEST event_reactor_perf 00:05:30.479 ************************************ 00:05:30.479 19:06:53 event -- event/event.sh@49 -- # uname -s 00:05:30.479 19:06:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:30.479 19:06:53 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:30.479 19:06:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.479 19:06:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.479 19:06:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.479 ************************************ 00:05:30.479 START TEST event_scheduler 00:05:30.479 ************************************ 00:05:30.479 19:06:53 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:30.738 * Looking for test storage... 
00:05:30.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:30.738 19:06:53 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.738 19:06:53 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.738 19:06:53 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.738 19:06:53 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:30.738 19:06:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.739 19:06:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:30.739 19:06:53 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.739 19:06:53 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.739 19:06:53 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.739 19:06:53 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:30.739 19:06:53 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.739 19:06:53 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.739 --rc genhtml_branch_coverage=1 00:05:30.739 --rc genhtml_function_coverage=1 00:05:30.739 --rc genhtml_legend=1 00:05:30.739 --rc geninfo_all_blocks=1 00:05:30.739 --rc geninfo_unexecuted_blocks=1 00:05:30.739 00:05:30.739 ' 00:05:30.739 19:06:53 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.739 --rc genhtml_branch_coverage=1 00:05:30.739 --rc genhtml_function_coverage=1 00:05:30.739 --rc genhtml_legend=1 00:05:30.739 --rc geninfo_all_blocks=1 00:05:30.739 --rc geninfo_unexecuted_blocks=1 00:05:30.739 00:05:30.739 ' 00:05:30.739 19:06:53 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.739 --rc genhtml_branch_coverage=1 00:05:30.739 --rc genhtml_function_coverage=1 00:05:30.739 --rc genhtml_legend=1 00:05:30.739 --rc geninfo_all_blocks=1 00:05:30.739 --rc geninfo_unexecuted_blocks=1 00:05:30.739 00:05:30.739 ' 00:05:30.739 19:06:53 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.739 --rc genhtml_branch_coverage=1 00:05:30.739 --rc genhtml_function_coverage=1 00:05:30.739 --rc genhtml_legend=1 00:05:30.739 --rc geninfo_all_blocks=1 00:05:30.739 --rc geninfo_unexecuted_blocks=1 00:05:30.739 00:05:30.739 ' 00:05:30.739 19:06:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:30.739 19:06:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3554887 00:05:30.739 19:06:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.739 19:06:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:30.739 19:06:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
3554887 00:05:30.739 19:06:53 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3554887 ']' 00:05:30.739 19:06:53 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.739 19:06:53 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.739 19:06:53 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.739 19:06:53 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.739 19:06:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.739 [2024-11-26 19:06:53.761736] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:05:30.739 [2024-11-26 19:06:53.761784] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3554887 ] 00:05:30.739 [2024-11-26 19:06:53.835912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:30.999 [2024-11-26 19:06:53.881948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.999 [2024-11-26 19:06:53.882056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.999 [2024-11-26 19:06:53.882162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.999 [2024-11-26 19:06:53.882162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.999 19:06:53 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.999 19:06:53 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:30.999 19:06:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:30.999 19:06:53 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.999 19:06:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.999 [2024-11-26 19:06:53.910603] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:30.999 [2024-11-26 19:06:53.910618] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:30.999 [2024-11-26 19:06:53.910627] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:30.999 [2024-11-26 19:06:53.910632] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:30.999 [2024-11-26 19:06:53.910638] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:30.999 19:06:53 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.999 19:06:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:30.999 19:06:53 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.999 19:06:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.999 [2024-11-26 19:06:53.985281] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
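Because the scheduler test app above was launched with --wait-for-rpc, the script first selects the dynamic scheduler and only then lets initialization proceed; the dpdk_governor ERROR is non-fatal here, and the NOTICE lines show the dynamic scheduler continuing without the governor. Against any SPDK app started with --wait-for-rpc, the equivalent manual sequence is a two-step sketch (assuming the default /var/tmp/spdk.sock RPC socket):

    # choose the dynamic scheduler while the app is still waiting for RPCs
    scripts/rpc.py framework_set_scheduler dynamic

    # finish subsystem initialization so the reactors start scheduling threads
    scripts/rpc.py framework_start_init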
00:05:30.999 19:06:53 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.999 19:06:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:30.999 19:06:53 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.999 19:06:53 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.999 19:06:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.999 ************************************ 00:05:30.999 START TEST scheduler_create_thread 00:05:30.999 ************************************ 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.999 2 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.999 3 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.999 4 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.999 5 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.999 6 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.999 7 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.999 8 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.999 9 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.999 10 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.999 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.258 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.258 19:06:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:31.258 19:06:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:31.258 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.258 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.518 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.518 19:06:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:31.518 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.518 19:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.425 19:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.425 19:06:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:33.425 19:06:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:33.425 19:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.425 19:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.362 19:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.362 00:05:34.362 real 0m3.101s 00:05:34.362 user 0m0.020s 00:05:34.362 sys 0m0.009s 00:05:34.362 19:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.362 19:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.362 ************************************ 00:05:34.362 END TEST scheduler_create_thread 00:05:34.362 ************************************ 00:05:34.362 19:06:57 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:34.362 19:06:57 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3554887 00:05:34.362 19:06:57 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3554887 ']' 00:05:34.362 19:06:57 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3554887 00:05:34.362 19:06:57 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:34.362 19:06:57 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.362 19:06:57 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3554887 00:05:34.362 19:06:57 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:34.362 19:06:57 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:34.362 19:06:57 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3554887' 00:05:34.362 killing process with pid 3554887 00:05:34.362 19:06:57 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3554887 00:05:34.362 19:06:57 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3554887 00:05:34.621 [2024-11-26 19:06:57.500388] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
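The scheduler_create_thread subtest that just finished drives the test app's plugin RPCs: it creates several pinned threads with different activity levels, lowers one thread's activity, and deletes another before shutdown. A condensed sketch of those calls against the scheduler app's RPC socket, assuming rpc.py can import scheduler_plugin (the test arranges that via PYTHONPATH outside this excerpt) and that the returned thread IDs match the 11 and 12 seen in the trace:

    # create a thread pinned to core 0 reporting 100% busy (-n name, -m cpumask, -a active percent)
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100

    # drop thread 11 to 50% active, then delete thread 12
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12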
00:05:34.621 00:05:34.621 real 0m4.148s 00:05:34.621 user 0m6.587s 00:05:34.621 sys 0m0.368s 00:05:34.621 19:06:57 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.621 19:06:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.621 ************************************ 00:05:34.621 END TEST event_scheduler 00:05:34.621 ************************************ 00:05:34.621 19:06:57 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:34.621 19:06:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:34.621 19:06:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.621 19:06:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.621 19:06:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.884 ************************************ 00:05:34.884 START TEST app_repeat 00:05:34.884 ************************************ 00:05:34.884 19:06:57 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:34.884 19:06:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.884 19:06:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.884 19:06:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:34.884 19:06:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.884 19:06:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:34.884 19:06:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:34.884 19:06:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:34.884 19:06:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3555629 00:05:34.884 19:06:57 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:34.884 19:06:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.884 19:06:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3555629' 00:05:34.884 Process app_repeat pid: 3555629 00:05:34.884 19:06:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:34.884 19:06:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:34.884 spdk_app_start Round 0 00:05:34.884 19:06:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3555629 /var/tmp/spdk-nbd.sock 00:05:34.884 19:06:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3555629 ']' 00:05:34.884 19:06:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.884 19:06:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.884 19:06:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.884 19:06:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.884 19:06:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.884 [2024-11-26 19:06:57.796130] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
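[editor's note] The killprocess helper that tears down the scheduler app above (pid 3554887), and later the app_repeat app, follows a guard-then-kill pattern: check the pid is non-empty and still alive, look up its command name, then kill and reap it. A condensed, illustrative sketch of that flow based on the autotest_common.sh lines visible in the trace; it is not the verbatim helper (the sudo special case is omitted, and wait only works when the pid is a child of the calling shell):

    # Condensed from the killprocess trace (autotest_common.sh@954-978); illustrative only.
    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                     # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" || return 0                    # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            # the real helper special-cases sudo-wrapped processes; omitted in this sketch
            return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                           # reap it so the test summary sees a clean exit
    }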
00:05:34.884 [2024-11-26 19:06:57.796191] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3555629 ] 00:05:34.884 [2024-11-26 19:06:57.872400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.884 [2024-11-26 19:06:57.917688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.884 [2024-11-26 19:06:57.917691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.190 19:06:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.190 19:06:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:35.191 19:06:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.191 Malloc0 00:05:35.191 19:06:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.482 Malloc1 00:05:35.482 19:06:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.482 19:06:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.482 19:06:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.482 19:06:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.482 19:06:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.482 19:06:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.482 19:06:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.482 19:06:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.482 19:06:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.482 19:06:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.482 19:06:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.482 19:06:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.482 19:06:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.482 19:06:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.482 19:06:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.482 19:06:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.779 /dev/nbd0 00:05:35.779 19:06:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.779 19:06:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.779 19:06:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:35.779 19:06:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:35.779 19:06:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:35.779 19:06:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:35.779 19:06:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:35.779 19:06:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:35.780 19:06:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:35.780 19:06:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:35.780 19:06:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.780 1+0 records in 00:05:35.780 1+0 records out 00:05:35.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226903 s, 18.1 MB/s 00:05:35.780 19:06:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.780 19:06:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:35.780 19:06:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.780 19:06:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:35.780 19:06:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:35.780 19:06:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.780 19:06:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.780 19:06:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:35.780 /dev/nbd1 00:05:35.780 19:06:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.061 19:06:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.061 19:06:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:36.061 19:06:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.061 19:06:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.061 19:06:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.061 19:06:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:36.061 19:06:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.061 19:06:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.061 19:06:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.061 19:06:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.061 1+0 records in 00:05:36.061 1+0 records out 00:05:36.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191422 s, 21.4 MB/s 00:05:36.061 19:06:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.061 19:06:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.061 19:06:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.061 19:06:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.061 19:06:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.061 19:06:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.061 19:06:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.061 
19:06:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.061 19:06:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.061 19:06:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.061 19:06:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.062 { 00:05:36.062 "nbd_device": "/dev/nbd0", 00:05:36.062 "bdev_name": "Malloc0" 00:05:36.062 }, 00:05:36.062 { 00:05:36.062 "nbd_device": "/dev/nbd1", 00:05:36.062 "bdev_name": "Malloc1" 00:05:36.062 } 00:05:36.062 ]' 00:05:36.062 19:06:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.062 { 00:05:36.062 "nbd_device": "/dev/nbd0", 00:05:36.062 "bdev_name": "Malloc0" 00:05:36.062 }, 00:05:36.062 { 00:05:36.062 "nbd_device": "/dev/nbd1", 00:05:36.062 "bdev_name": "Malloc1" 00:05:36.062 } 00:05:36.062 ]' 00:05:36.062 19:06:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.062 19:06:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.062 /dev/nbd1' 00:05:36.062 19:06:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.062 /dev/nbd1' 00:05:36.062 19:06:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.062 19:06:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.062 19:06:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.062 19:06:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.062 19:06:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.062 19:06:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.062 19:06:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.062 19:06:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.062 19:06:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.062 19:06:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.062 19:06:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.062 19:06:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.062 256+0 records in 00:05:36.062 256+0 records out 00:05:36.062 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106588 s, 98.4 MB/s 00:05:36.062 19:06:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.062 19:06:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.430 256+0 records in 00:05:36.430 256+0 records out 00:05:36.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150494 s, 69.7 MB/s 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.431 256+0 records in 00:05:36.431 256+0 records out 00:05:36.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146774 s, 71.4 MB/s 00:05:36.431 19:06:59 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.431 19:06:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.432 19:06:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.432 19:06:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.432 19:06:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.432 19:06:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.698 19:06:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.698 19:06:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.698 19:06:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.698 19:06:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.698 19:06:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:36.698 19:06:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.698 19:06:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.698 19:06:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.698 19:06:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.698 19:06:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.698 19:06:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.958 19:06:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.958 19:06:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.958 19:06:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.958 19:06:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.958 19:06:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.958 19:06:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.958 19:06:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:36.958 19:06:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.958 19:06:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.959 19:06:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.959 19:06:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.959 19:06:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.959 19:06:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.218 19:07:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.218 [2024-11-26 19:07:00.250807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.218 [2024-11-26 19:07:00.288717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.218 [2024-11-26 19:07:00.288719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.218 [2024-11-26 19:07:00.329656] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.218 [2024-11-26 19:07:00.329704] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.507 19:07:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.507 19:07:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:40.507 spdk_app_start Round 1 00:05:40.507 19:07:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3555629 /var/tmp/spdk-nbd.sock 00:05:40.507 19:07:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3555629 ']' 00:05:40.507 19:07:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.507 19:07:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.507 19:07:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
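[editor's note] The nbd_dd_data_verify steps traced in Round 0 above reduce to: generate a 1 MiB random reference file, dd it onto each exported /dev/nbdX with O_DIRECT, then cmp the device contents back against the reference. A stand-alone sketch of that write/verify cycle; the temporary file path is moved to /tmp here, while the trace writes it under spdk/test/event:

    # Write/verify cycle behind nbd_dd_data_verify (bdev/nbd_common.sh@70-85), as traced above.
    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/tmp/nbdrandtest

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 1 MiB of random reference data

    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # write it through each nbd device
    done

    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                             # byte-compare the first 1 MiB read back
    done

    rm "$tmp_file"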
00:05:40.507 19:07:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.507 19:07:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.507 19:07:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.507 19:07:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:40.507 19:07:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.507 Malloc0 00:05:40.507 19:07:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.766 Malloc1 00:05:40.766 19:07:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.766 19:07:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.766 19:07:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.766 19:07:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.766 19:07:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.766 19:07:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.766 19:07:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.766 19:07:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.766 19:07:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.766 19:07:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.766 19:07:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.766 19:07:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.766 19:07:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.766 19:07:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.766 19:07:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.766 19:07:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.025 /dev/nbd0 00:05:41.025 19:07:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.025 19:07:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.025 19:07:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:41.025 19:07:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:41.025 19:07:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.025 19:07:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.025 19:07:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:41.025 19:07:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:41.025 19:07:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.025 19:07:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.025 19:07:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:41.025 1+0 records in 00:05:41.025 1+0 records out 00:05:41.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236389 s, 17.3 MB/s 00:05:41.025 19:07:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.025 19:07:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.025 19:07:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.025 19:07:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.025 19:07:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.025 19:07:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.025 19:07:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.025 19:07:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.025 /dev/nbd1 00:05:41.284 19:07:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.284 19:07:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.284 19:07:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:41.284 19:07:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:41.284 19:07:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.284 19:07:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.284 19:07:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:41.284 19:07:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:41.284 19:07:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.284 19:07:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.284 19:07:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.284 1+0 records in 00:05:41.284 1+0 records out 00:05:41.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253063 s, 16.2 MB/s 00:05:41.284 19:07:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.284 19:07:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.284 19:07:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.284 19:07:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.284 19:07:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.285 19:07:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.285 19:07:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.285 19:07:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.285 19:07:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.285 19:07:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.285 19:07:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:41.285 { 00:05:41.285 "nbd_device": "/dev/nbd0", 00:05:41.285 "bdev_name": "Malloc0" 00:05:41.285 }, 00:05:41.285 { 00:05:41.285 "nbd_device": "/dev/nbd1", 00:05:41.285 "bdev_name": "Malloc1" 00:05:41.285 } 00:05:41.285 ]' 00:05:41.285 19:07:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.285 19:07:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.285 { 00:05:41.285 "nbd_device": "/dev/nbd0", 00:05:41.285 "bdev_name": "Malloc0" 00:05:41.285 }, 00:05:41.285 { 00:05:41.285 "nbd_device": "/dev/nbd1", 00:05:41.285 "bdev_name": "Malloc1" 00:05:41.285 } 00:05:41.285 ]' 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.544 /dev/nbd1' 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.544 /dev/nbd1' 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.544 256+0 records in 00:05:41.544 256+0 records out 00:05:41.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00995529 s, 105 MB/s 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.544 256+0 records in 00:05:41.544 256+0 records out 00:05:41.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142994 s, 73.3 MB/s 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.544 256+0 records in 00:05:41.544 256+0 records out 00:05:41.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150003 s, 69.9 MB/s 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.544 19:07:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.803 19:07:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.803 19:07:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.803 19:07:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.803 19:07:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.803 19:07:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.803 19:07:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.803 19:07:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.803 19:07:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.803 19:07:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.803 19:07:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.062 19:07:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.062 19:07:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.062 19:07:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.062 19:07:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.062 19:07:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.062 19:07:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.062 19:07:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.062 19:07:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.062 19:07:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.062 19:07:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.062 19:07:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.062 19:07:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.062 19:07:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.062 19:07:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.062 19:07:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.062 19:07:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.062 19:07:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.062 19:07:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.062 19:07:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.062 19:07:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.062 19:07:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.062 19:07:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.062 19:07:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.062 19:07:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.321 19:07:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.580 [2024-11-26 19:07:05.530067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.580 [2024-11-26 19:07:05.567208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.580 [2024-11-26 19:07:05.567208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.580 [2024-11-26 19:07:05.608648] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.580 [2024-11-26 19:07:05.608698] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.867 19:07:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.867 19:07:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:45.867 spdk_app_start Round 2 00:05:45.867 19:07:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3555629 /var/tmp/spdk-nbd.sock 00:05:45.867 19:07:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3555629 ']' 00:05:45.867 19:07:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.867 19:07:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.867 19:07:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
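[editor's note] The nbd_get_count checks above (2 devices right after nbd_start_disk, 0 after nbd_stop_disk) parse the JSON returned by the nbd_get_disks RPC with jq and count the /dev/nbd entries. A small sketch of that counting step against the same spdk-nbd.sock socket; the '|| true' guards grep -c exiting non-zero when nothing matches, which is exactly the empty-list case after stop:

    # Counting attached nbd devices the way nbd_get_count (bdev/nbd_common.sh@61-66) does above.
    rpc_sock=/var/tmp/spdk-nbd.sock
    nbd_disks_json=$(./scripts/rpc.py -s "$rpc_sock" nbd_get_disks)

    # pull out just the device nodes, e.g. "/dev/nbd0" and "/dev/nbd1", or nothing after nbd_stop_disk
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    [ "$count" -eq 2 ]    # 2 right after nbd_start_disk; the same check expects 0 after nbd_stop_disk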
00:05:45.867 19:07:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.867 19:07:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.867 19:07:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.867 19:07:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:45.867 19:07:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.867 Malloc0 00:05:45.867 19:07:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.867 Malloc1 00:05:46.126 19:07:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.126 19:07:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.126 19:07:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.126 19:07:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.126 19:07:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.126 19:07:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.126 19:07:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.126 19:07:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.126 19:07:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.126 19:07:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.126 19:07:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.126 19:07:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.126 19:07:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.126 19:07:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.126 19:07:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.126 19:07:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.126 /dev/nbd0 00:05:46.126 19:07:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.126 19:07:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.126 19:07:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:46.126 19:07:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:46.126 19:07:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:46.126 19:07:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:46.126 19:07:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:46.126 19:07:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:46.126 19:07:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:46.126 19:07:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:46.126 19:07:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:46.384 1+0 records in 00:05:46.384 1+0 records out 00:05:46.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180625 s, 22.7 MB/s 00:05:46.384 19:07:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.384 19:07:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:46.384 19:07:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.384 19:07:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:46.384 19:07:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:46.384 19:07:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.384 19:07:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.384 19:07:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.384 /dev/nbd1 00:05:46.384 19:07:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.384 19:07:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.384 19:07:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:46.384 19:07:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:46.384 19:07:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:46.384 19:07:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:46.384 19:07:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:46.384 19:07:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:46.384 19:07:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:46.384 19:07:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:46.384 19:07:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.384 1+0 records in 00:05:46.384 1+0 records out 00:05:46.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224837 s, 18.2 MB/s 00:05:46.384 19:07:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.384 19:07:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:46.384 19:07:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.384 19:07:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:46.384 19:07:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:46.384 19:07:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.384 19:07:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.384 19:07:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.384 19:07:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.384 19:07:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.643 19:07:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:46.643 { 00:05:46.643 "nbd_device": "/dev/nbd0", 00:05:46.643 "bdev_name": "Malloc0" 00:05:46.643 }, 00:05:46.643 { 00:05:46.643 "nbd_device": "/dev/nbd1", 00:05:46.643 "bdev_name": "Malloc1" 00:05:46.643 } 00:05:46.643 ]' 00:05:46.643 19:07:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.643 { 00:05:46.643 "nbd_device": "/dev/nbd0", 00:05:46.643 "bdev_name": "Malloc0" 00:05:46.643 }, 00:05:46.643 { 00:05:46.643 "nbd_device": "/dev/nbd1", 00:05:46.643 "bdev_name": "Malloc1" 00:05:46.643 } 00:05:46.643 ]' 00:05:46.643 19:07:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.643 19:07:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.643 /dev/nbd1' 00:05:46.643 19:07:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.643 /dev/nbd1' 00:05:46.643 19:07:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.643 19:07:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.643 19:07:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.643 19:07:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.643 19:07:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.643 19:07:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.643 19:07:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.643 19:07:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.643 19:07:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.643 19:07:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.643 19:07:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.643 19:07:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.643 256+0 records in 00:05:46.643 256+0 records out 00:05:46.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100417 s, 104 MB/s 00:05:46.643 19:07:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.643 19:07:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.901 256+0 records in 00:05:46.901 256+0 records out 00:05:46.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132707 s, 79.0 MB/s 00:05:46.901 19:07:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.901 19:07:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.901 256+0 records in 00:05:46.901 256+0 records out 00:05:46.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149403 s, 70.2 MB/s 00:05:46.901 19:07:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.901 19:07:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.901 19:07:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.901 19:07:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.901 19:07:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.901 19:07:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.901 19:07:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.901 19:07:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.901 19:07:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.901 19:07:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.901 19:07:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.901 19:07:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.901 19:07:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.902 19:07:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.902 19:07:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.902 19:07:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.902 19:07:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.902 19:07:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.902 19:07:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.902 19:07:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.161 19:07:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.420 19:07:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.420 19:07:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.420 19:07:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.420 19:07:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.420 19:07:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.420 19:07:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.420 19:07:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.420 19:07:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.420 19:07:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.420 19:07:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.420 19:07:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.420 19:07:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.420 19:07:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.679 19:07:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:47.938 [2024-11-26 19:07:10.825814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.938 [2024-11-26 19:07:10.862948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.938 [2024-11-26 19:07:10.862949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.938 [2024-11-26 19:07:10.903761] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.938 [2024-11-26 19:07:10.903804] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.224 19:07:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3555629 /var/tmp/spdk-nbd.sock 00:05:51.224 19:07:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3555629 ']' 00:05:51.225 19:07:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.225 19:07:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.225 19:07:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:51.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
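[editor's note] The waitfornbd / waitfornbd_exit traces that repeat in every round are retry loops over /proc/partitions: after nbd_start_disk the test waits (up to 20 tries) for the device to appear and proves it is readable with a single direct 4 KiB dd; after nbd_stop_disk it waits for the entry to disappear. A hedged reconstruction of that polling pattern; the checks match the trace, but the sleep between retries is not visible there and is an assumption, and the probe file is moved to /tmp:

    # Polling pattern behind waitfornbd / waitfornbd_exit (autotest_common.sh@872-893, nbd_common.sh@35-45).
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                   # assumed back-off; not shown in the trace
        done
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one direct 4 KiB read
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                                # the trace checks '[' 4096 '!=' 0 ']'
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || return 0   # entry gone: the stop completed
            sleep 0.1                                             # assumed back-off
        done
        return 1
    }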
00:05:51.225 19:07:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.225 19:07:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.225 19:07:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.225 19:07:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:51.225 19:07:13 event.app_repeat -- event/event.sh@39 -- # killprocess 3555629 00:05:51.225 19:07:13 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3555629 ']' 00:05:51.225 19:07:13 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3555629 00:05:51.225 19:07:13 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:51.225 19:07:13 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.225 19:07:13 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3555629 00:05:51.225 19:07:13 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.225 19:07:13 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.225 19:07:13 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3555629' 00:05:51.225 killing process with pid 3555629 00:05:51.225 19:07:13 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3555629 00:05:51.225 19:07:13 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3555629 00:05:51.225 spdk_app_start is called in Round 0. 00:05:51.225 Shutdown signal received, stop current app iteration 00:05:51.225 Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 reinitialization... 00:05:51.225 spdk_app_start is called in Round 1. 00:05:51.225 Shutdown signal received, stop current app iteration 00:05:51.225 Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 reinitialization... 00:05:51.225 spdk_app_start is called in Round 2. 00:05:51.225 Shutdown signal received, stop current app iteration 00:05:51.225 Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 reinitialization... 00:05:51.225 spdk_app_start is called in Round 3. 
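[editor's note] The "spdk_app_start is called in Round N / Shutdown signal received" messages summarize the outer structure of app_repeat_test: one app_repeat process is started with the -r/-m/-t arguments shown earlier and, for rounds 0..2, the harness waits for its RPC socket, re-creates the malloc bdevs, runs the nbd verification, then asks the app to restart itself via spdk_kill_instance SIGTERM and sleeps before the next round. A schematic of that loop, assuming the helpers traced above (waitforlisten, nbd_rpc_data_verify, killprocess); rpc_cmd is a local stand-in for the direct rpc.py calls, and backgrounding with '&' is a reconstruction:

    # Schematic of app_repeat_test (test/event/event.sh) as reflected in this trace.
    rpc_cmd() { ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }   # stand-in for the traced rpc.py calls

    ./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock     # block until the RPC socket answers

        rpc_cmd bdev_malloc_create 64 4096                     # Malloc0
        rpc_cmd bdev_malloc_create 64 4096                     # Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'

        # ask the app to shut down this iteration; app_repeat starts the next round on its own
        rpc_cmd spdk_kill_instance SIGTERM
        sleep 3
    done

    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock         # the final round (Round 3) comes back up
    killprocess "$repeat_pid"                                  # then the harness stops it for good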
00:05:51.225 Shutdown signal received, stop current app iteration 00:05:51.225 19:07:14 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:51.225 19:07:14 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:51.225 00:05:51.225 real 0m16.325s 00:05:51.225 user 0m35.840s 00:05:51.225 sys 0m2.545s 00:05:51.225 19:07:14 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.225 19:07:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.225 ************************************ 00:05:51.225 END TEST app_repeat 00:05:51.225 ************************************ 00:05:51.225 19:07:14 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:51.225 19:07:14 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:51.225 19:07:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.225 19:07:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.225 19:07:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.225 ************************************ 00:05:51.225 START TEST cpu_locks 00:05:51.225 ************************************ 00:05:51.225 19:07:14 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:51.225 * Looking for test storage... 00:05:51.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:51.225 19:07:14 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.225 19:07:14 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.225 19:07:14 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.225 19:07:14 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.225 19:07:14 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:51.225 19:07:14 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.225 19:07:14 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.225 --rc genhtml_branch_coverage=1 00:05:51.225 --rc genhtml_function_coverage=1 00:05:51.225 --rc genhtml_legend=1 00:05:51.225 --rc geninfo_all_blocks=1 00:05:51.225 --rc geninfo_unexecuted_blocks=1 00:05:51.225 00:05:51.225 ' 00:05:51.225 19:07:14 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.225 --rc genhtml_branch_coverage=1 00:05:51.225 --rc genhtml_function_coverage=1 00:05:51.225 --rc genhtml_legend=1 00:05:51.225 --rc geninfo_all_blocks=1 00:05:51.225 --rc geninfo_unexecuted_blocks=1 00:05:51.225 00:05:51.225 ' 00:05:51.225 19:07:14 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.225 --rc genhtml_branch_coverage=1 00:05:51.225 --rc genhtml_function_coverage=1 00:05:51.225 --rc genhtml_legend=1 00:05:51.225 --rc geninfo_all_blocks=1 00:05:51.225 --rc geninfo_unexecuted_blocks=1 00:05:51.225 00:05:51.225 ' 00:05:51.225 19:07:14 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.225 --rc genhtml_branch_coverage=1 00:05:51.225 --rc genhtml_function_coverage=1 00:05:51.225 --rc genhtml_legend=1 00:05:51.225 --rc geninfo_all_blocks=1 00:05:51.225 --rc geninfo_unexecuted_blocks=1 00:05:51.225 00:05:51.225 ' 00:05:51.225 19:07:14 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:51.225 19:07:14 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:51.225 19:07:14 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:51.225 19:07:14 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:51.484 19:07:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.484 19:07:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.484 19:07:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.484 ************************************ 
00:05:51.484 START TEST default_locks 00:05:51.484 ************************************ 00:05:51.484 19:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:51.484 19:07:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3558635 00:05:51.484 19:07:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3558635 00:05:51.484 19:07:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.484 19:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3558635 ']' 00:05:51.484 19:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.484 19:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.484 19:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.484 19:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.484 19:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.484 [2024-11-26 19:07:14.424712] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:05:51.484 [2024-11-26 19:07:14.424754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3558635 ] 00:05:51.484 [2024-11-26 19:07:14.497489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.484 [2024-11-26 19:07:14.539121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.742 19:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.742 19:07:14 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:51.742 19:07:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3558635 00:05:51.742 19:07:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3558635 00:05:51.742 19:07:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.307 lslocks: write error 00:05:52.307 19:07:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3558635 00:05:52.307 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3558635 ']' 00:05:52.307 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3558635 00:05:52.307 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:52.307 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.307 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3558635 00:05:52.307 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.307 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.307 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 3558635' 00:05:52.307 killing process with pid 3558635 00:05:52.307 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3558635 00:05:52.307 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3558635 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3558635 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3558635 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3558635 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3558635 ']' 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
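Annotation: the default_locks run above boils down to one check: after spdk_tgt comes up on core mask 0x1, lslocks must show the process holding an spdk_cpu_lock file before the test kills it. A rough sketch of that locks_exist step (the pid is the one from the trace and purely illustrative; the lock files are the /var/tmp/spdk_cpu_lock_* paths listed later in the run):

    pid=3558635   # spdk_tgt started with: build/bin/spdk_tgt -m 0x1
    # Every core claimed by the target is backed by a lock file such as
    # /var/tmp/spdk_cpu_lock_000; lslocks -p lists the locks that pid holds.
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "core lock held by $pid"
    fi

The "lslocks: write error" lines in the log appear to come from grep -q closing the pipe after its first match, not from a test failure.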
00:05:52.566 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3558635) - No such process 00:05:52.566 ERROR: process (pid: 3558635) is no longer running 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:52.566 19:07:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:52.567 19:07:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:52.567 19:07:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:52.567 00:05:52.567 real 0m1.148s 00:05:52.567 user 0m1.098s 00:05:52.567 sys 0m0.524s 00:05:52.567 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.567 19:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.567 ************************************ 00:05:52.567 END TEST default_locks 00:05:52.567 ************************************ 00:05:52.567 19:07:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:52.567 19:07:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.567 19:07:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.567 19:07:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.567 ************************************ 00:05:52.567 START TEST default_locks_via_rpc 00:05:52.567 ************************************ 00:05:52.567 19:07:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:52.567 19:07:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3558893 00:05:52.567 19:07:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3558893 00:05:52.567 19:07:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.567 19:07:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3558893 ']' 00:05:52.567 19:07:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.567 19:07:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.567 19:07:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
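Annotation: every test in this file tears its target down through the same killprocess trace seen above: check the pid is still alive, confirm it is an SPDK reactor rather than something like sudo, send SIGTERM, then wait for it. A hedged reconstruction of that flow as a standalone helper (only behaviour visible in the trace is kept; the real helper does more):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                       # process must still exist
        [ "$(uname)" = Linux ] || return 1               # ps options below are Linux-specific
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for an spdk_tgt
        [ "$process_name" = sudo ] && return 1           # refuse to signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                  # reap it if it is our child
    }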
00:05:52.567 19:07:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.567 19:07:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.567 [2024-11-26 19:07:15.641441] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:05:52.567 [2024-11-26 19:07:15.641483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3558893 ] 00:05:52.825 [2024-11-26 19:07:15.716535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.825 [2024-11-26 19:07:15.758259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.083 19:07:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.083 19:07:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:53.083 19:07:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:53.083 19:07:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.083 19:07:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.083 19:07:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.083 19:07:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:53.083 19:07:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:53.083 19:07:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:53.083 19:07:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:53.083 19:07:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:53.083 19:07:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.083 19:07:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.083 19:07:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.083 19:07:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3558893 00:05:53.083 19:07:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3558893 00:05:53.084 19:07:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.342 19:07:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3558893 00:05:53.342 19:07:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3558893 ']' 00:05:53.342 19:07:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3558893 00:05:53.342 19:07:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:53.600 19:07:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.600 19:07:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3558893 00:05:53.600 19:07:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.600 
19:07:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.600 19:07:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3558893' 00:05:53.600 killing process with pid 3558893 00:05:53.600 19:07:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3558893 00:05:53.600 19:07:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3558893 00:05:53.920 00:05:53.920 real 0m1.214s 00:05:53.920 user 0m1.171s 00:05:53.920 sys 0m0.555s 00:05:53.920 19:07:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.920 19:07:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.920 ************************************ 00:05:53.920 END TEST default_locks_via_rpc 00:05:53.920 ************************************ 00:05:53.920 19:07:16 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:53.920 19:07:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.920 19:07:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.920 19:07:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.920 ************************************ 00:05:53.920 START TEST non_locking_app_on_locked_coremask 00:05:53.920 ************************************ 00:05:53.920 19:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:53.920 19:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3559148 00:05:53.920 19:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3559148 /var/tmp/spdk.sock 00:05:53.920 19:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.920 19:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3559148 ']' 00:05:53.920 19:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.920 19:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.920 19:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.920 19:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.920 19:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.920 [2024-11-26 19:07:16.925129] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
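Annotation: default_locks_via_rpc, which finishes just above, exercises the same lock through RPC rather than process flags: the running target is told to drop its core locks, the test confirms the /var/tmp/spdk_cpu_lock_* glob is empty, and then the locks are re-taken. A sketch of that round trip against the default /var/tmp/spdk.sock socket (rpc_cmd in the real harness wraps rpc.py; shown here directly):

    # Release the per-core lock files held by the running target ...
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    # ... here no /var/tmp/spdk_cpu_lock_* files should remain (the no_locks check) ...
    # ... then take them again so the rest of the test sees the locked state.
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks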
00:05:53.920 [2024-11-26 19:07:16.925172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3559148 ] 00:05:53.920 [2024-11-26 19:07:17.000282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.178 [2024-11-26 19:07:17.042758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.178 19:07:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.178 19:07:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:54.178 19:07:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3559159 00:05:54.178 19:07:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3559159 /var/tmp/spdk2.sock 00:05:54.178 19:07:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:54.178 19:07:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3559159 ']' 00:05:54.178 19:07:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.179 19:07:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.179 19:07:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.179 19:07:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.179 19:07:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.437 [2024-11-26 19:07:17.296334] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:05:54.437 [2024-11-26 19:07:17.296381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3559159 ] 00:05:54.437 [2024-11-26 19:07:17.381089] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
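Annotation: the non_locking_app_on_locked_coremask startup traced here runs two targets against the same core: the first claims core 0 normally, the second reuses mask 0x1 but is launched with --disable-cpumask-locks and its own RPC socket, which is why the log prints "CPU core locks deactivated" for it and both come up. A minimal sketch of that arrangement (binary and socket paths as in the trace, ampersands standing in for the script's waitforlisten handling):

    # First target: takes the core-0 lock file under /var/tmp.
    build/bin/spdk_tgt -m 0x1 &
    # Second target: same core mask, but opts out of core locking and uses a
    # separate RPC socket so the two instances do not collide.
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &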
00:05:54.437 [2024-11-26 19:07:17.381110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.437 [2024-11-26 19:07:17.461996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.372 19:07:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.372 19:07:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.372 19:07:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3559148 00:05:55.372 19:07:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.372 19:07:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3559148 00:05:55.631 lslocks: write error 00:05:55.631 19:07:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3559148 00:05:55.631 19:07:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3559148 ']' 00:05:55.631 19:07:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3559148 00:05:55.631 19:07:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:55.631 19:07:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.631 19:07:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3559148 00:05:55.631 19:07:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.631 19:07:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.631 19:07:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3559148' 00:05:55.631 killing process with pid 3559148 00:05:55.631 19:07:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3559148 00:05:55.631 19:07:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3559148 00:05:56.200 19:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3559159 00:05:56.200 19:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3559159 ']' 00:05:56.200 19:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3559159 00:05:56.200 19:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:56.200 19:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.200 19:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3559159 00:05:56.458 19:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.458 19:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.458 19:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3559159' 00:05:56.458 
killing process with pid 3559159 00:05:56.458 19:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3559159 00:05:56.458 19:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3559159 00:05:56.716 00:05:56.716 real 0m2.737s 00:05:56.717 user 0m2.900s 00:05:56.717 sys 0m0.902s 00:05:56.717 19:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.717 19:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.717 ************************************ 00:05:56.717 END TEST non_locking_app_on_locked_coremask 00:05:56.717 ************************************ 00:05:56.717 19:07:19 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:56.717 19:07:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.717 19:07:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.717 19:07:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.717 ************************************ 00:05:56.717 START TEST locking_app_on_unlocked_coremask 00:05:56.717 ************************************ 00:05:56.717 19:07:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:56.717 19:07:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3559645 00:05:56.717 19:07:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3559645 /var/tmp/spdk.sock 00:05:56.717 19:07:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:56.717 19:07:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3559645 ']' 00:05:56.717 19:07:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.717 19:07:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.717 19:07:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.717 19:07:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.717 19:07:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.717 [2024-11-26 19:07:19.736354] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:05:56.717 [2024-11-26 19:07:19.736400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3559645 ] 00:05:56.717 [2024-11-26 19:07:19.810471] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:56.717 [2024-11-26 19:07:19.810496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.977 [2024-11-26 19:07:19.851763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.544 19:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.544 19:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:57.544 19:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3559702 00:05:57.544 19:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3559702 /var/tmp/spdk2.sock 00:05:57.544 19:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:57.544 19:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3559702 ']' 00:05:57.544 19:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.544 19:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.544 19:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.544 19:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.544 19:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.544 [2024-11-26 19:07:20.617662] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:05:57.544 [2024-11-26 19:07:20.617720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3559702 ] 00:05:57.803 [2024-11-26 19:07:20.710220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.803 [2024-11-26 19:07:20.790971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.370 19:07:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.370 19:07:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:58.370 19:07:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3559702 00:05:58.370 19:07:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3559702 00:05:58.370 19:07:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.940 lslocks: write error 00:05:58.940 19:07:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3559645 00:05:58.940 19:07:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3559645 ']' 00:05:58.940 19:07:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3559645 00:05:58.940 19:07:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:58.940 19:07:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.940 19:07:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3559645 00:05:58.940 19:07:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.940 19:07:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.940 19:07:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3559645' 00:05:58.940 killing process with pid 3559645 00:05:58.940 19:07:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3559645 00:05:58.940 19:07:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3559645 00:05:59.509 19:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3559702 00:05:59.509 19:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3559702 ']' 00:05:59.509 19:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3559702 00:05:59.509 19:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:59.509 19:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.509 19:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3559702 00:05:59.509 19:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.509 19:07:22 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.509 19:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3559702' 00:05:59.509 killing process with pid 3559702 00:05:59.509 19:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3559702 00:05:59.509 19:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3559702 00:06:00.080 00:06:00.080 real 0m3.218s 00:06:00.080 user 0m3.545s 00:06:00.080 sys 0m0.927s 00:06:00.080 19:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.080 19:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.080 ************************************ 00:06:00.080 END TEST locking_app_on_unlocked_coremask 00:06:00.080 ************************************ 00:06:00.080 19:07:22 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:00.080 19:07:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.080 19:07:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.080 19:07:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.080 ************************************ 00:06:00.080 START TEST locking_app_on_locked_coremask 00:06:00.080 ************************************ 00:06:00.080 19:07:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:00.080 19:07:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3560152 00:06:00.080 19:07:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.080 19:07:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3560152 /var/tmp/spdk.sock 00:06:00.080 19:07:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3560152 ']' 00:06:00.080 19:07:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.080 19:07:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.080 19:07:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.080 19:07:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.080 19:07:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.080 [2024-11-26 19:07:23.011248] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:06:00.080 [2024-11-26 19:07:23.011284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3560152 ] 00:06:00.080 [2024-11-26 19:07:23.084898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.080 [2024-11-26 19:07:23.125161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.340 19:07:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.340 19:07:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:00.340 19:07:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3560231 00:06:00.340 19:07:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3560231 /var/tmp/spdk2.sock 00:06:00.340 19:07:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:00.340 19:07:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:00.340 19:07:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3560231 /var/tmp/spdk2.sock 00:06:00.340 19:07:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:00.340 19:07:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.340 19:07:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:00.340 19:07:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.340 19:07:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3560231 /var/tmp/spdk2.sock 00:06:00.340 19:07:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3560231 ']' 00:06:00.340 19:07:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.340 19:07:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.340 19:07:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.340 19:07:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.340 19:07:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.340 [2024-11-26 19:07:23.406064] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:06:00.340 [2024-11-26 19:07:23.406112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3560231 ] 00:06:00.600 [2024-11-26 19:07:23.498532] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3560152 has claimed it. 00:06:00.600 [2024-11-26 19:07:23.498569] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:01.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3560231) - No such process 00:06:01.167 ERROR: process (pid: 3560231) is no longer running 00:06:01.167 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.167 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:01.167 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:01.167 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.167 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.167 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.167 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3560152 00:06:01.167 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3560152 00:06:01.167 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.736 lslocks: write error 00:06:01.736 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3560152 00:06:01.736 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3560152 ']' 00:06:01.736 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3560152 00:06:01.736 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:01.736 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.736 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3560152 00:06:01.736 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.736 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.736 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3560152' 00:06:01.736 killing process with pid 3560152 00:06:01.736 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3560152 00:06:01.736 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3560152 00:06:01.996 00:06:01.996 real 0m1.964s 00:06:01.996 user 0m2.113s 00:06:01.996 sys 0m0.652s 00:06:01.996 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
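Annotation: locking_app_on_locked_coremask, whose failure path is shown above, is the negative case: with the core-0 lock already held by pid 3560152, a second target started without --disable-cpumask-locks must refuse to come up, and the test wraps waitforlisten in NOT so that the "Cannot create lock on core 0" error counts as a pass. A hedged sketch of that expected-failure check (the harness's NOT helper is reduced to a bang):

    # Second instance on the already-locked core; it should log
    # "Unable to acquire lock on assigned core mask - exiting." and terminate.
    if ! build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "second target was correctly refused the core lock"
    fi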
00:06:01.996 19:07:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.996 ************************************ 00:06:01.996 END TEST locking_app_on_locked_coremask 00:06:01.996 ************************************ 00:06:01.996 19:07:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:01.996 19:07:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.996 19:07:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.996 19:07:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.996 ************************************ 00:06:01.996 START TEST locking_overlapped_coremask 00:06:01.996 ************************************ 00:06:01.996 19:07:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:01.996 19:07:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3560638 00:06:01.996 19:07:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3560638 /var/tmp/spdk.sock 00:06:01.996 19:07:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:01.996 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3560638 ']' 00:06:01.996 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.996 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.996 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.996 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.996 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.996 [2024-11-26 19:07:25.055403] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:06:01.996 [2024-11-26 19:07:25.055441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3560638 ] 00:06:02.254 [2024-11-26 19:07:25.128600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.254 [2024-11-26 19:07:25.171322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.254 [2024-11-26 19:07:25.171347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.254 [2024-11-26 19:07:25.171347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.821 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.821 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:02.821 19:07:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3560659 00:06:02.821 19:07:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3560659 /var/tmp/spdk2.sock 00:06:02.821 19:07:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:02.821 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:02.821 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3560659 /var/tmp/spdk2.sock 00:06:02.821 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:02.821 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.821 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:02.821 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.821 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3560659 /var/tmp/spdk2.sock 00:06:02.821 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3560659 ']' 00:06:02.821 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.821 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.821 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.821 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.821 19:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.821 [2024-11-26 19:07:25.926878] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:06:02.821 [2024-11-26 19:07:25.926925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3560659 ] 00:06:03.080 [2024-11-26 19:07:26.017368] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3560638 has claimed it. 00:06:03.080 [2024-11-26 19:07:26.017403] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:03.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3560659) - No such process 00:06:03.648 ERROR: process (pid: 3560659) is no longer running 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3560638 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3560638 ']' 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3560638 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3560638 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3560638' 00:06:03.648 killing process with pid 3560638 00:06:03.648 19:07:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3560638 00:06:03.648 19:07:26 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3560638 00:06:03.908 00:06:03.908 real 0m1.920s 00:06:03.908 user 0m5.554s 00:06:03.908 sys 0m0.412s 00:06:03.908 19:07:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.908 19:07:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.908 ************************************ 00:06:03.908 END TEST locking_overlapped_coremask 00:06:03.908 ************************************ 00:06:03.908 19:07:26 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:03.908 19:07:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.908 19:07:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.908 19:07:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.908 ************************************ 00:06:03.908 START TEST locking_overlapped_coremask_via_rpc 00:06:03.908 ************************************ 00:06:03.908 19:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:03.908 19:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3560917 00:06:03.908 19:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3560917 /var/tmp/spdk.sock 00:06:03.908 19:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:03.908 19:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3560917 ']' 00:06:03.908 19:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.908 19:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.908 19:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.908 19:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.908 19:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.166 [2024-11-26 19:07:27.044171] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:06:04.166 [2024-11-26 19:07:27.044215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3560917 ] 00:06:04.166 [2024-11-26 19:07:27.116538] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
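The locking_overlapped_coremask test that just ended is coremask arithmetic at heart: the first spdk_tgt runs on coremask 0x7 (binary 111, cores 0 to 2, confirmed by the three reactor lines above) and claims one lock file per core under /var/tmp/spdk_cpu_lock_NNN, while the second instance asks for -m 0x1c (binary 11100, cores 2 to 4); core 2 is the overlap, so the second target logs "Cannot create lock on core 2" and exits, and check_remaining_locks then confirms that exactly spdk_cpu_lock_000 through 002 are left behind. A minimal sketch of that kind of check, assuming the same lock-file naming (the mask decoding and comparison below are illustrative, not lifted from the test scripts):
  #!/usr/bin/env bash
  # Illustrative check: decode a hex coremask into core numbers and compare
  # the per-core lock files that should exist with what is under /var/tmp.
  mask=${1:-0x7}                                 # 0x7 -> cores 0 1 2
  expected=()
  for ((core = 0; core < 64; core++)); do
      (( (mask >> core) & 1 )) && expected+=("$(printf '/var/tmp/spdk_cpu_lock_%03d' "$core")")
  done
  actual=(/var/tmp/spdk_cpu_lock_*)              # same glob the test's check_remaining_locks uses
  echo "expected: ${expected[*]}"
  echo "actual:   ${actual[*]}"
  [[ "${expected[*]}" == "${actual[*]}" ]] && echo "locks match" || echo "locks differ"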
00:06:04.166 [2024-11-26 19:07:27.116563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.166 [2024-11-26 19:07:27.161064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.166 [2024-11-26 19:07:27.161175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.166 [2024-11-26 19:07:27.161175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.425 19:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.425 19:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:04.425 19:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3560935 00:06:04.425 19:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3560935 /var/tmp/spdk2.sock 00:06:04.425 19:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:04.425 19:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3560935 ']' 00:06:04.425 19:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.425 19:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.425 19:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.425 19:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.425 19:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.425 [2024-11-26 19:07:27.432430] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:06:04.425 [2024-11-26 19:07:27.432472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3560935 ] 00:06:04.425 [2024-11-26 19:07:27.528729] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:04.425 [2024-11-26 19:07:27.528756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.684 [2024-11-26 19:07:27.619061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.684 [2024-11-26 19:07:27.622722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.684 [2024-11-26 19:07:27.622723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.251 [2024-11-26 19:07:28.287744] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3560917 has claimed it. 
00:06:05.251 request: 00:06:05.251 { 00:06:05.251 "method": "framework_enable_cpumask_locks", 00:06:05.251 "req_id": 1 00:06:05.251 } 00:06:05.251 Got JSON-RPC error response 00:06:05.251 response: 00:06:05.251 { 00:06:05.251 "code": -32603, 00:06:05.251 "message": "Failed to claim CPU core: 2" 00:06:05.251 } 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3560917 /var/tmp/spdk.sock 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3560917 ']' 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.251 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.509 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.509 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.509 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3560935 /var/tmp/spdk2.sock 00:06:05.509 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3560935 ']' 00:06:05.509 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.509 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.509 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
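In the via_rpc variant both targets start with --disable-cpumask-locks (hence the "CPU core locks deactivated" notices above), and the per-core locks are only claimed later over JSON-RPC: framework_enable_cpumask_locks succeeds on the first socket, while the same call against /var/tmp/spdk2.sock fails with error -32603 ("Failed to claim CPU core: 2") because core 2 is already held, exactly as the error response above shows. Reproduced by hand it would look roughly like the following sketch (paths are relative to this workspace's spdk checkout, and the sleep is only a crude stand-in for the test's waitforlisten helper):
  # Start two targets on overlapping coremasks without claiming locks up front.
  build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
  build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  sleep 2   # crude stand-in for waitforlisten

  # First claim wins the shared core...
  scripts/rpc.py framework_enable_cpumask_locks
  # ...the second claim is expected to fail with JSON-RPC -32603 ("Failed to claim CPU core: 2").
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks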
00:06:05.509 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.509 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.768 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.768 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.768 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:05.768 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:05.768 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:05.768 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:05.768 00:06:05.768 real 0m1.716s 00:06:05.768 user 0m0.818s 00:06:05.768 sys 0m0.144s 00:06:05.768 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.768 19:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.768 ************************************ 00:06:05.768 END TEST locking_overlapped_coremask_via_rpc 00:06:05.768 ************************************ 00:06:05.768 19:07:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:05.768 19:07:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3560917 ]] 00:06:05.768 19:07:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3560917 00:06:05.768 19:07:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3560917 ']' 00:06:05.768 19:07:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3560917 00:06:05.768 19:07:28 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:05.768 19:07:28 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.768 19:07:28 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3560917 00:06:05.768 19:07:28 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.768 19:07:28 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.768 19:07:28 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3560917' 00:06:05.768 killing process with pid 3560917 00:06:05.768 19:07:28 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3560917 00:06:05.768 19:07:28 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3560917 00:06:06.026 19:07:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3560935 ]] 00:06:06.026 19:07:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3560935 00:06:06.026 19:07:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3560935 ']' 00:06:06.026 19:07:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3560935 00:06:06.026 19:07:29 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:06.026 19:07:29 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:06.026 19:07:29 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3560935 00:06:06.290 19:07:29 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:06.290 19:07:29 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:06.290 19:07:29 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3560935' 00:06:06.290 killing process with pid 3560935 00:06:06.290 19:07:29 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3560935 00:06:06.290 19:07:29 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3560935 00:06:06.549 19:07:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:06.549 19:07:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:06.549 19:07:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3560917 ]] 00:06:06.549 19:07:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3560917 00:06:06.549 19:07:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3560917 ']' 00:06:06.549 19:07:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3560917 00:06:06.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3560917) - No such process 00:06:06.549 19:07:29 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3560917 is not found' 00:06:06.549 Process with pid 3560917 is not found 00:06:06.549 19:07:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3560935 ]] 00:06:06.549 19:07:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3560935 00:06:06.549 19:07:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3560935 ']' 00:06:06.549 19:07:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3560935 00:06:06.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3560935) - No such process 00:06:06.549 19:07:29 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3560935 is not found' 00:06:06.549 Process with pid 3560935 is not found 00:06:06.549 19:07:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:06.549 00:06:06.549 real 0m15.308s 00:06:06.549 user 0m26.946s 00:06:06.549 sys 0m5.085s 00:06:06.549 19:07:29 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.549 19:07:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.549 ************************************ 00:06:06.549 END TEST cpu_locks 00:06:06.549 ************************************ 00:06:06.549 00:06:06.549 real 0m39.924s 00:06:06.549 user 1m15.919s 00:06:06.549 sys 0m8.624s 00:06:06.549 19:07:29 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.549 19:07:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.549 ************************************ 00:06:06.549 END TEST event 00:06:06.549 ************************************ 00:06:06.549 19:07:29 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:06.549 19:07:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.549 19:07:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.549 19:07:29 -- common/autotest_common.sh@10 -- # set +x 00:06:06.549 ************************************ 00:06:06.549 START TEST thread 00:06:06.549 ************************************ 00:06:06.549 19:07:29 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:06.549 * Looking for test storage... 00:06:06.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:06.549 19:07:29 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.809 19:07:29 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.809 19:07:29 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.809 19:07:29 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.809 19:07:29 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.809 19:07:29 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.809 19:07:29 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.809 19:07:29 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.809 19:07:29 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.809 19:07:29 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.809 19:07:29 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.809 19:07:29 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.809 19:07:29 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.809 19:07:29 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.809 19:07:29 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.809 19:07:29 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:06.809 19:07:29 thread -- scripts/common.sh@345 -- # : 1 00:06:06.809 19:07:29 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.809 19:07:29 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.809 19:07:29 thread -- scripts/common.sh@365 -- # decimal 1 00:06:06.809 19:07:29 thread -- scripts/common.sh@353 -- # local d=1 00:06:06.809 19:07:29 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.809 19:07:29 thread -- scripts/common.sh@355 -- # echo 1 00:06:06.809 19:07:29 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.809 19:07:29 thread -- scripts/common.sh@366 -- # decimal 2 00:06:06.809 19:07:29 thread -- scripts/common.sh@353 -- # local d=2 00:06:06.809 19:07:29 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.809 19:07:29 thread -- scripts/common.sh@355 -- # echo 2 00:06:06.809 19:07:29 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.809 19:07:29 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.809 19:07:29 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.809 19:07:29 thread -- scripts/common.sh@368 -- # return 0 00:06:06.809 19:07:29 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.809 19:07:29 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.809 --rc genhtml_branch_coverage=1 00:06:06.810 --rc genhtml_function_coverage=1 00:06:06.810 --rc genhtml_legend=1 00:06:06.810 --rc geninfo_all_blocks=1 00:06:06.810 --rc geninfo_unexecuted_blocks=1 00:06:06.810 00:06:06.810 ' 00:06:06.810 19:07:29 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.810 --rc genhtml_branch_coverage=1 00:06:06.810 --rc genhtml_function_coverage=1 00:06:06.810 --rc genhtml_legend=1 00:06:06.810 --rc geninfo_all_blocks=1 00:06:06.810 --rc geninfo_unexecuted_blocks=1 00:06:06.810 
00:06:06.810 ' 00:06:06.810 19:07:29 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.810 --rc genhtml_branch_coverage=1 00:06:06.810 --rc genhtml_function_coverage=1 00:06:06.810 --rc genhtml_legend=1 00:06:06.810 --rc geninfo_all_blocks=1 00:06:06.810 --rc geninfo_unexecuted_blocks=1 00:06:06.810 00:06:06.810 ' 00:06:06.810 19:07:29 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.810 --rc genhtml_branch_coverage=1 00:06:06.810 --rc genhtml_function_coverage=1 00:06:06.810 --rc genhtml_legend=1 00:06:06.810 --rc geninfo_all_blocks=1 00:06:06.810 --rc geninfo_unexecuted_blocks=1 00:06:06.810 00:06:06.810 ' 00:06:06.810 19:07:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:06.810 19:07:29 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:06.810 19:07:29 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.810 19:07:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.810 ************************************ 00:06:06.810 START TEST thread_poller_perf 00:06:06.810 ************************************ 00:06:06.810 19:07:29 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:06.810 [2024-11-26 19:07:29.803401] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:06:06.810 [2024-11-26 19:07:29.803465] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3561490 ] 00:06:06.810 [2024-11-26 19:07:29.881539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.810 [2024-11-26 19:07:29.921368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.810 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:08.187 [2024-11-26T18:07:31.301Z] ====================================== 00:06:08.187 [2024-11-26T18:07:31.301Z] busy:2105325722 (cyc) 00:06:08.187 [2024-11-26T18:07:31.301Z] total_run_count: 409000 00:06:08.187 [2024-11-26T18:07:31.301Z] tsc_hz: 2100000000 (cyc) 00:06:08.187 [2024-11-26T18:07:31.301Z] ====================================== 00:06:08.187 [2024-11-26T18:07:31.301Z] poller_cost: 5147 (cyc), 2450 (nsec) 00:06:08.187 00:06:08.187 real 0m1.187s 00:06:08.187 user 0m1.111s 00:06:08.187 sys 0m0.071s 00:06:08.187 19:07:30 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.187 19:07:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.187 ************************************ 00:06:08.187 END TEST thread_poller_perf 00:06:08.187 ************************************ 00:06:08.187 19:07:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:08.187 19:07:31 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:08.187 19:07:31 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.187 19:07:31 thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.187 ************************************ 00:06:08.187 START TEST thread_poller_perf 00:06:08.187 ************************************ 00:06:08.187 19:07:31 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:08.187 [2024-11-26 19:07:31.061090] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:06:08.187 [2024-11-26 19:07:31.061159] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3561736 ] 00:06:08.187 [2024-11-26 19:07:31.139806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.187 [2024-11-26 19:07:31.179424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.187 Running 1000 pollers for 1 seconds with 0 microseconds period. 
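In the poller_perf output above the reported numbers are related by simple division: poller_cost is busy cycles divided by total_run_count, converted to nanoseconds with the reported tsc_hz (2.1 GHz here), so 2105325722 / 409000 gives roughly 5147 cycles, or about 2450 ns; the 0-microsecond pass below works out the same way (394 cyc, 187 ns). The -b 1000 -l 1 -t 1 flags appear to correspond to the "1000 pollers ... 1 seconds ... 1 microseconds period" banner. A quick shell check of that arithmetic (the variable names are ad hoc; only the numbers come from the run above):
  busy=2105325722 runs=409000 tsc_hz=2100000000
  cyc=$(( busy / runs ))                    # 5147 cycles per poller invocation
  nsec=$(( cyc * 1000000000 / tsc_hz ))     # about 2450 ns at 2.1 GHz
  echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"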
00:06:09.123 [2024-11-26T18:07:32.237Z] ====================================== 00:06:09.123 [2024-11-26T18:07:32.237Z] busy:2101500834 (cyc) 00:06:09.123 [2024-11-26T18:07:32.237Z] total_run_count: 5332000 00:06:09.123 [2024-11-26T18:07:32.237Z] tsc_hz: 2100000000 (cyc) 00:06:09.123 [2024-11-26T18:07:32.237Z] ====================================== 00:06:09.123 [2024-11-26T18:07:32.237Z] poller_cost: 394 (cyc), 187 (nsec) 00:06:09.123 00:06:09.123 real 0m1.177s 00:06:09.123 user 0m1.099s 00:06:09.123 sys 0m0.074s 00:06:09.123 19:07:32 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.123 19:07:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.123 ************************************ 00:06:09.123 END TEST thread_poller_perf 00:06:09.123 ************************************ 00:06:09.383 19:07:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:09.383 00:06:09.383 real 0m2.679s 00:06:09.383 user 0m2.380s 00:06:09.383 sys 0m0.314s 00:06:09.383 19:07:32 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.383 19:07:32 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.383 ************************************ 00:06:09.383 END TEST thread 00:06:09.383 ************************************ 00:06:09.383 19:07:32 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:09.383 19:07:32 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:09.383 19:07:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.383 19:07:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.383 19:07:32 -- common/autotest_common.sh@10 -- # set +x 00:06:09.383 ************************************ 00:06:09.383 START TEST app_cmdline 00:06:09.383 ************************************ 00:06:09.383 19:07:32 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:09.383 * Looking for test storage... 
00:06:09.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:09.383 19:07:32 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.383 19:07:32 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.383 19:07:32 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.383 19:07:32 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.383 19:07:32 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:09.643 19:07:32 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.643 19:07:32 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.643 19:07:32 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.643 19:07:32 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:09.643 19:07:32 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.643 19:07:32 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.643 --rc genhtml_branch_coverage=1 00:06:09.643 --rc genhtml_function_coverage=1 00:06:09.643 --rc genhtml_legend=1 00:06:09.643 --rc geninfo_all_blocks=1 00:06:09.643 --rc geninfo_unexecuted_blocks=1 00:06:09.643 00:06:09.643 ' 00:06:09.643 19:07:32 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.643 --rc genhtml_branch_coverage=1 00:06:09.643 --rc genhtml_function_coverage=1 00:06:09.643 --rc genhtml_legend=1 00:06:09.643 --rc geninfo_all_blocks=1 00:06:09.643 --rc geninfo_unexecuted_blocks=1 
00:06:09.643 00:06:09.643 ' 00:06:09.643 19:07:32 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.643 --rc genhtml_branch_coverage=1 00:06:09.643 --rc genhtml_function_coverage=1 00:06:09.643 --rc genhtml_legend=1 00:06:09.643 --rc geninfo_all_blocks=1 00:06:09.643 --rc geninfo_unexecuted_blocks=1 00:06:09.643 00:06:09.643 ' 00:06:09.643 19:07:32 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.643 --rc genhtml_branch_coverage=1 00:06:09.643 --rc genhtml_function_coverage=1 00:06:09.643 --rc genhtml_legend=1 00:06:09.643 --rc geninfo_all_blocks=1 00:06:09.643 --rc geninfo_unexecuted_blocks=1 00:06:09.643 00:06:09.643 ' 00:06:09.643 19:07:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:09.643 19:07:32 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:09.643 19:07:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3562037 00:06:09.643 19:07:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3562037 00:06:09.643 19:07:32 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3562037 ']' 00:06:09.643 19:07:32 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.643 19:07:32 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.643 19:07:32 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.643 19:07:32 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.643 19:07:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:09.643 [2024-11-26 19:07:32.540035] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
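The cmdline test's target is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so its RPC surface is deliberately reduced to those two methods: the version query in the trace below returns the JSON blob with major/minor/patch/suffix fields, while env_dpdk_get_mem_stats is rejected with JSON-RPC error -32601 ("Method not found"). Driven by hand it would look roughly like this sketch (paths are relative to the spdk checkout, and the sleep again stands in for waitforlisten):
  # Target restricted to two RPCs for the cmdline test (rpc.py defaults to /var/tmp/spdk.sock).
  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  sleep 2   # crude stand-in for waitforlisten

  scripts/rpc.py spdk_get_version          # allowed: returns the version JSON seen below
  scripts/rpc.py rpc_get_methods           # allowed: lists exactly the permitted methods
  scripts/rpc.py env_dpdk_get_mem_stats    # not allowed: JSON-RPC -32601 "Method not found"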
00:06:09.643 [2024-11-26 19:07:32.540092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3562037 ] 00:06:09.643 [2024-11-26 19:07:32.614865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.643 [2024-11-26 19:07:32.654580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.902 19:07:32 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.902 19:07:32 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:09.902 19:07:32 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:10.161 { 00:06:10.161 "version": "SPDK v25.01-pre git sha1 b09de013a", 00:06:10.161 "fields": { 00:06:10.161 "major": 25, 00:06:10.161 "minor": 1, 00:06:10.161 "patch": 0, 00:06:10.161 "suffix": "-pre", 00:06:10.161 "commit": "b09de013a" 00:06:10.161 } 00:06:10.161 } 00:06:10.161 19:07:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:10.161 19:07:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:10.161 19:07:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:10.161 19:07:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:10.161 19:07:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:10.161 19:07:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:10.161 19:07:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:10.161 19:07:33 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.161 19:07:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:10.161 19:07:33 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.161 19:07:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:10.161 19:07:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:10.161 19:07:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.161 19:07:33 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:10.161 19:07:33 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.161 19:07:33 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:10.161 19:07:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.161 19:07:33 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:10.161 19:07:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.161 19:07:33 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:10.161 19:07:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.161 19:07:33 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:10.161 19:07:33 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:10.161 19:07:33 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.420 request: 00:06:10.420 { 00:06:10.420 "method": "env_dpdk_get_mem_stats", 00:06:10.420 "req_id": 1 00:06:10.420 } 00:06:10.420 Got JSON-RPC error response 00:06:10.420 response: 00:06:10.420 { 00:06:10.420 "code": -32601, 00:06:10.420 "message": "Method not found" 00:06:10.420 } 00:06:10.420 19:07:33 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:10.420 19:07:33 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:10.421 19:07:33 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:10.421 19:07:33 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:10.421 19:07:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3562037 00:06:10.421 19:07:33 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3562037 ']' 00:06:10.421 19:07:33 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3562037 00:06:10.421 19:07:33 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:10.421 19:07:33 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.421 19:07:33 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3562037 00:06:10.421 19:07:33 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.421 19:07:33 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.421 19:07:33 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3562037' 00:06:10.421 killing process with pid 3562037 00:06:10.421 19:07:33 app_cmdline -- common/autotest_common.sh@973 -- # kill 3562037 00:06:10.421 19:07:33 app_cmdline -- common/autotest_common.sh@978 -- # wait 3562037 00:06:10.680 00:06:10.680 real 0m1.347s 00:06:10.680 user 0m1.569s 00:06:10.680 sys 0m0.473s 00:06:10.680 19:07:33 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.680 19:07:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:10.680 ************************************ 00:06:10.680 END TEST app_cmdline 00:06:10.680 ************************************ 00:06:10.680 19:07:33 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:10.680 19:07:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.680 19:07:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.680 19:07:33 -- common/autotest_common.sh@10 -- # set +x 00:06:10.680 ************************************ 00:06:10.680 START TEST version 00:06:10.680 ************************************ 00:06:10.680 19:07:33 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:10.939 * Looking for test storage... 
00:06:10.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:10.939 19:07:33 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.939 19:07:33 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.939 19:07:33 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.939 19:07:33 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.939 19:07:33 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.939 19:07:33 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.939 19:07:33 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.939 19:07:33 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.939 19:07:33 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.939 19:07:33 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.939 19:07:33 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.939 19:07:33 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.939 19:07:33 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.939 19:07:33 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.939 19:07:33 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.939 19:07:33 version -- scripts/common.sh@344 -- # case "$op" in 00:06:10.939 19:07:33 version -- scripts/common.sh@345 -- # : 1 00:06:10.939 19:07:33 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.939 19:07:33 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.939 19:07:33 version -- scripts/common.sh@365 -- # decimal 1 00:06:10.939 19:07:33 version -- scripts/common.sh@353 -- # local d=1 00:06:10.940 19:07:33 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.940 19:07:33 version -- scripts/common.sh@355 -- # echo 1 00:06:10.940 19:07:33 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.940 19:07:33 version -- scripts/common.sh@366 -- # decimal 2 00:06:10.940 19:07:33 version -- scripts/common.sh@353 -- # local d=2 00:06:10.940 19:07:33 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.940 19:07:33 version -- scripts/common.sh@355 -- # echo 2 00:06:10.940 19:07:33 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.940 19:07:33 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.940 19:07:33 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.940 19:07:33 version -- scripts/common.sh@368 -- # return 0 00:06:10.940 19:07:33 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.940 19:07:33 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.940 --rc genhtml_branch_coverage=1 00:06:10.940 --rc genhtml_function_coverage=1 00:06:10.940 --rc genhtml_legend=1 00:06:10.940 --rc geninfo_all_blocks=1 00:06:10.940 --rc geninfo_unexecuted_blocks=1 00:06:10.940 00:06:10.940 ' 00:06:10.940 19:07:33 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.940 --rc genhtml_branch_coverage=1 00:06:10.940 --rc genhtml_function_coverage=1 00:06:10.940 --rc genhtml_legend=1 00:06:10.940 --rc geninfo_all_blocks=1 00:06:10.940 --rc geninfo_unexecuted_blocks=1 00:06:10.940 00:06:10.940 ' 00:06:10.940 19:07:33 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.940 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.940 --rc genhtml_branch_coverage=1 00:06:10.940 --rc genhtml_function_coverage=1 00:06:10.940 --rc genhtml_legend=1 00:06:10.940 --rc geninfo_all_blocks=1 00:06:10.940 --rc geninfo_unexecuted_blocks=1 00:06:10.940 00:06:10.940 ' 00:06:10.940 19:07:33 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.940 --rc genhtml_branch_coverage=1 00:06:10.940 --rc genhtml_function_coverage=1 00:06:10.940 --rc genhtml_legend=1 00:06:10.940 --rc geninfo_all_blocks=1 00:06:10.940 --rc geninfo_unexecuted_blocks=1 00:06:10.940 00:06:10.940 ' 00:06:10.940 19:07:33 version -- app/version.sh@17 -- # get_header_version major 00:06:10.940 19:07:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:10.940 19:07:33 version -- app/version.sh@14 -- # cut -f2 00:06:10.940 19:07:33 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.940 19:07:33 version -- app/version.sh@17 -- # major=25 00:06:10.940 19:07:33 version -- app/version.sh@18 -- # get_header_version minor 00:06:10.940 19:07:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:10.940 19:07:33 version -- app/version.sh@14 -- # cut -f2 00:06:10.940 19:07:33 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.940 19:07:33 version -- app/version.sh@18 -- # minor=1 00:06:10.940 19:07:33 version -- app/version.sh@19 -- # get_header_version patch 00:06:10.940 19:07:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:10.940 19:07:33 version -- app/version.sh@14 -- # cut -f2 00:06:10.940 19:07:33 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.940 19:07:33 version -- app/version.sh@19 -- # patch=0 00:06:10.940 19:07:33 version -- app/version.sh@20 -- # get_header_version suffix 00:06:10.940 19:07:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:10.940 19:07:33 version -- app/version.sh@14 -- # cut -f2 00:06:10.940 19:07:33 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.940 19:07:33 version -- app/version.sh@20 -- # suffix=-pre 00:06:10.940 19:07:33 version -- app/version.sh@22 -- # version=25.1 00:06:10.940 19:07:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:10.940 19:07:33 version -- app/version.sh@28 -- # version=25.1rc0 00:06:10.940 19:07:33 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:10.940 19:07:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:10.940 19:07:33 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:10.940 19:07:33 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:10.940 00:06:10.940 real 0m0.243s 00:06:10.940 user 0m0.142s 00:06:10.940 sys 0m0.145s 00:06:10.940 19:07:33 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.940 
19:07:33 version -- common/autotest_common.sh@10 -- # set +x 00:06:10.940 ************************************ 00:06:10.940 END TEST version 00:06:10.940 ************************************ 00:06:10.940 19:07:34 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:10.940 19:07:34 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:10.940 19:07:34 -- spdk/autotest.sh@194 -- # uname -s 00:06:10.940 19:07:34 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:10.940 19:07:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:10.940 19:07:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:10.940 19:07:34 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:10.940 19:07:34 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:10.940 19:07:34 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:10.940 19:07:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:10.940 19:07:34 -- common/autotest_common.sh@10 -- # set +x 00:06:11.199 19:07:34 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:11.199 19:07:34 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:11.199 19:07:34 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:11.199 19:07:34 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:11.199 19:07:34 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:11.199 19:07:34 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:11.199 19:07:34 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:11.199 19:07:34 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:11.199 19:07:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.199 19:07:34 -- common/autotest_common.sh@10 -- # set +x 00:06:11.199 ************************************ 00:06:11.199 START TEST nvmf_tcp 00:06:11.199 ************************************ 00:06:11.199 19:07:34 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:11.199 * Looking for test storage... 
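The version test that just finished derives the expected string purely from the source tree: get_header_version greps the SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX defines out of include/spdk/version.h, assembles 25.1rc0 from them, and checks that python3 -c 'import spdk; print(spdk.__version__)' agrees. A condensed sketch of the header extraction step (the real get_header_version helper's argument handling is simplified here, and SPDK_ROOT is only shorthand for this workspace's repo path):
  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  get_header_version() {
      # e.g. get_header_version MAJOR -> 25 ; get_header_version SUFFIX -> -pre
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$SPDK_ROOT/include/spdk/version.h" \
          | cut -f2 | tr -d '"'
  }
  echo "$(get_header_version MAJOR).$(get_header_version MINOR) suffix=$(get_header_version SUFFIX)"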
00:06:11.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:11.199 19:07:34 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:11.199 19:07:34 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:11.199 19:07:34 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:11.199 19:07:34 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.199 19:07:34 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:11.200 19:07:34 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.200 19:07:34 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:11.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.200 --rc genhtml_branch_coverage=1 00:06:11.200 --rc genhtml_function_coverage=1 00:06:11.200 --rc genhtml_legend=1 00:06:11.200 --rc geninfo_all_blocks=1 00:06:11.200 --rc geninfo_unexecuted_blocks=1 00:06:11.200 00:06:11.200 ' 00:06:11.200 19:07:34 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:11.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.200 --rc genhtml_branch_coverage=1 00:06:11.200 --rc genhtml_function_coverage=1 00:06:11.200 --rc genhtml_legend=1 00:06:11.200 --rc geninfo_all_blocks=1 00:06:11.200 --rc geninfo_unexecuted_blocks=1 00:06:11.200 00:06:11.200 ' 00:06:11.200 19:07:34 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:11.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.200 --rc genhtml_branch_coverage=1 00:06:11.200 --rc genhtml_function_coverage=1 00:06:11.200 --rc genhtml_legend=1 00:06:11.200 --rc geninfo_all_blocks=1 00:06:11.200 --rc geninfo_unexecuted_blocks=1 00:06:11.200 00:06:11.200 ' 00:06:11.200 19:07:34 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:11.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.200 --rc genhtml_branch_coverage=1 00:06:11.200 --rc genhtml_function_coverage=1 00:06:11.200 --rc genhtml_legend=1 00:06:11.200 --rc geninfo_all_blocks=1 00:06:11.200 --rc geninfo_unexecuted_blocks=1 00:06:11.200 00:06:11.200 ' 00:06:11.200 19:07:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:11.200 19:07:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:11.200 19:07:34 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:11.200 19:07:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:11.200 19:07:34 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.200 19:07:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.459 ************************************ 00:06:11.459 START TEST nvmf_target_core 00:06:11.459 ************************************ 00:06:11.459 19:07:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:11.459 * Looking for test storage... 00:06:11.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:11.459 19:07:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:11.459 19:07:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:11.459 19:07:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:11.459 19:07:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:11.459 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.459 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.459 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.459 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.459 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.459 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.459 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.459 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.459 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.459 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.459 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:11.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.460 --rc genhtml_branch_coverage=1 00:06:11.460 --rc genhtml_function_coverage=1 00:06:11.460 --rc genhtml_legend=1 00:06:11.460 --rc geninfo_all_blocks=1 00:06:11.460 --rc geninfo_unexecuted_blocks=1 00:06:11.460 00:06:11.460 ' 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:11.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.460 --rc genhtml_branch_coverage=1 00:06:11.460 --rc genhtml_function_coverage=1 00:06:11.460 --rc genhtml_legend=1 00:06:11.460 --rc geninfo_all_blocks=1 00:06:11.460 --rc geninfo_unexecuted_blocks=1 00:06:11.460 00:06:11.460 ' 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:11.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.460 --rc genhtml_branch_coverage=1 00:06:11.460 --rc genhtml_function_coverage=1 00:06:11.460 --rc genhtml_legend=1 00:06:11.460 --rc geninfo_all_blocks=1 00:06:11.460 --rc geninfo_unexecuted_blocks=1 00:06:11.460 00:06:11.460 ' 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:11.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.460 --rc genhtml_branch_coverage=1 00:06:11.460 --rc genhtml_function_coverage=1 00:06:11.460 --rc genhtml_legend=1 00:06:11.460 --rc geninfo_all_blocks=1 00:06:11.460 --rc geninfo_unexecuted_blocks=1 00:06:11.460 00:06:11.460 ' 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.460 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:11.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:11.461 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:11.461 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:11.461 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:11.461 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:11.461 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:11.461 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:11.461 19:07:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:11.461 19:07:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:11.461 19:07:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.461 19:07:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:11.461 
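[editor's note] The "[: : integer expression expected" message just above comes from build_nvmf_app_args evaluating '[' '' -eq 1 ']' at test/nvmf/common.sh line 33; the test simply fails and the script carries on. As an illustrative aside only (SOME_FLAG is a made-up name, not the variable common.sh actually checks), the usual ways to keep an empty flag out of a bash numeric test are:
# Sketch: reproducing and guarding the empty-value numeric test seen above.
SOME_FLAG=""
[ "$SOME_FLAG" -eq 1 ]                          # -> "[: : integer expression expected"
[ -n "$SOME_FLAG" ] && [ "$SOME_FLAG" -eq 1 ]   # guarded form: skip the comparison when empty
[ "${SOME_FLAG:-0}" -eq 1 ]                     # or default the value before comparing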
************************************ 00:06:11.461 START TEST nvmf_abort 00:06:11.461 ************************************ 00:06:11.461 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:11.721 * Looking for test storage... 00:06:11.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.721 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:11.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.722 --rc genhtml_branch_coverage=1 00:06:11.722 --rc genhtml_function_coverage=1 00:06:11.722 --rc genhtml_legend=1 00:06:11.722 --rc geninfo_all_blocks=1 00:06:11.722 --rc geninfo_unexecuted_blocks=1 00:06:11.722 00:06:11.722 ' 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:11.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.722 --rc genhtml_branch_coverage=1 00:06:11.722 --rc genhtml_function_coverage=1 00:06:11.722 --rc genhtml_legend=1 00:06:11.722 --rc geninfo_all_blocks=1 00:06:11.722 --rc geninfo_unexecuted_blocks=1 00:06:11.722 00:06:11.722 ' 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:11.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.722 --rc genhtml_branch_coverage=1 00:06:11.722 --rc genhtml_function_coverage=1 00:06:11.722 --rc genhtml_legend=1 00:06:11.722 --rc geninfo_all_blocks=1 00:06:11.722 --rc geninfo_unexecuted_blocks=1 00:06:11.722 00:06:11.722 ' 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:11.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.722 --rc genhtml_branch_coverage=1 00:06:11.722 --rc genhtml_function_coverage=1 00:06:11.722 --rc genhtml_legend=1 00:06:11.722 --rc geninfo_all_blocks=1 00:06:11.722 --rc geninfo_unexecuted_blocks=1 00:06:11.722 00:06:11.722 ' 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:11.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
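[editor's note] The same lcov probe repeats at the top of every test script above: autotest_common.sh runs a dotted-version comparison through scripts/common.sh's cmp_versions (traced here as "lt 1.15 2") before exporting the LCOV_OPTS/LCOV --rc branch/function coverage switches. A hedged sketch of that field-by-field comparison pattern (not the actual common.sh implementation) might look like:
# Sketch of a dotted-version "less than" check in the spirit of the traced cmp_versions.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "older lcov: enable the --rc branch/function coverage options"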
00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:11.722 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:18.293 19:07:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:18.293 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:18.293 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:18.294 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:18.294 19:07:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:18.294 Found net devices under 0000:86:00.0: cvl_0_0 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:18.294 Found net devices under 0000:86:00.1: cvl_0_1 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:18.294 19:07:40 
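[editor's note] The discovery pass above finds the two e810 functions (0000:86:00.0/.1, device 0x159b bound to ice), resolves the net device behind each function through sysfs, and assigns cvl_0_0 as the target interface and cvl_0_1 as the initiator. A minimal sketch of that sysfs lookup, with the PCI addresses taken from this log rather than detected:
# Sketch: list kernel net devices behind the two e810 functions seen above.
for pci in 0000:86:00.0 0000:86:00.1; do
    for netdir in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$netdir" ] || continue            # skip functions with no netdev exposed
        echo "Found net device under $pci: ${netdir##*/}"
    done
done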
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:18.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:18.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:06:18.294 00:06:18.294 --- 10.0.0.2 ping statistics --- 00:06:18.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:18.294 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:18.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:18.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:06:18.294 00:06:18.294 --- 10.0.0.1 ping statistics --- 00:06:18.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:18.294 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3565714 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3565714 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3565714 ']' 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.294 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.294 [2024-11-26 19:07:40.852647] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:06:18.294 [2024-11-26 19:07:40.852693] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:18.294 [2024-11-26 19:07:40.931519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.294 [2024-11-26 19:07:40.972819] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:18.294 [2024-11-26 19:07:40.972860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:18.294 [2024-11-26 19:07:40.972868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:18.294 [2024-11-26 19:07:40.972875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:18.294 [2024-11-26 19:07:40.972880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:18.294 [2024-11-26 19:07:40.974266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.294 [2024-11-26 19:07:40.974371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.294 [2024-11-26 19:07:40.974379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.881 [2024-11-26 19:07:41.744817] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.881 Malloc0 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.881 Delay0 
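[editor's note] With the target running inside cvl_0_0_ns_spdk (nvmf_tgt -i 0 -e 0xFFFF -m 0xE, pid 3565714), abort.sh has so far created a TCP transport, a 64 MiB malloc bdev with 4096-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above), and a Delay0 delay bdev layered on top of it; the subsystem, namespace, and listener are added in the records that follow. Issued directly with scripts/rpc.py, that setup would look roughly like the sketch below (assuming the default /var/tmp/spdk.sock RPC socket, which is what the test's rpc_cmd wrapper waits on):
# Sketch: the same construction driven by hand from the SPDK repo root.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256     # transport options exactly as passed by the test
scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0              # 64 MiB backing bdev, 4096-byte blocks
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# Subsystem nqn.2016-06.io.spdk:cnode0, its namespace, and the 10.0.0.2:4420 listener come next (see the trace below).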
00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.881 [2024-11-26 19:07:41.824567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.881 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:18.881 [2024-11-26 19:07:41.920492] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:21.409 Initializing NVMe Controllers 00:06:21.409 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:21.409 controller IO queue size 128 less than required 00:06:21.409 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:21.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:21.409 Initialization complete. Launching workers. 
00:06:21.409 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37343 00:06:21.409 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37408, failed to submit 62 00:06:21.409 success 37347, unsuccessful 61, failed 0 00:06:21.409 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:21.409 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.409 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:21.409 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.409 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:21.409 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:21.409 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:21.409 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:21.409 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:21.410 rmmod nvme_tcp 00:06:21.410 rmmod nvme_fabrics 00:06:21.410 rmmod nvme_keyring 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3565714 ']' 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3565714 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3565714 ']' 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3565714 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3565714 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3565714' 00:06:21.410 killing process with pid 3565714 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3565714 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3565714 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:21.410 19:07:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:21.410 19:07:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:23.947 00:06:23.947 real 0m11.913s 00:06:23.947 user 0m13.906s 00:06:23.947 sys 0m5.437s 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.947 ************************************ 00:06:23.947 END TEST nvmf_abort 00:06:23.947 ************************************ 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:23.947 ************************************ 00:06:23.947 START TEST nvmf_ns_hotplug_stress 00:06:23.947 ************************************ 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:23.947 * Looking for test storage... 
00:06:23.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.947 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.948 --rc genhtml_branch_coverage=1 00:06:23.948 --rc genhtml_function_coverage=1 00:06:23.948 --rc genhtml_legend=1 00:06:23.948 --rc geninfo_all_blocks=1 00:06:23.948 --rc geninfo_unexecuted_blocks=1 00:06:23.948 00:06:23.948 ' 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.948 --rc genhtml_branch_coverage=1 00:06:23.948 --rc genhtml_function_coverage=1 00:06:23.948 --rc genhtml_legend=1 00:06:23.948 --rc geninfo_all_blocks=1 00:06:23.948 --rc geninfo_unexecuted_blocks=1 00:06:23.948 00:06:23.948 ' 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.948 --rc genhtml_branch_coverage=1 00:06:23.948 --rc genhtml_function_coverage=1 00:06:23.948 --rc genhtml_legend=1 00:06:23.948 --rc geninfo_all_blocks=1 00:06:23.948 --rc geninfo_unexecuted_blocks=1 00:06:23.948 00:06:23.948 ' 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.948 --rc genhtml_branch_coverage=1 00:06:23.948 --rc genhtml_function_coverage=1 00:06:23.948 --rc genhtml_legend=1 00:06:23.948 --rc geninfo_all_blocks=1 00:06:23.948 --rc geninfo_unexecuted_blocks=1 00:06:23.948 00:06:23.948 ' 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:23.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:23.948 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:30.521 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:30.521 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:30.521 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:30.521 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:30.521 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:30.521 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:30.521 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:30.521 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:30.521 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:30.522 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.522 
19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:30.522 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:30.522 Found net devices under 0000:86:00.0: cvl_0_0 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:30.522 Found net devices under 0000:86:00.1: cvl_0_1 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:30.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:30.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:06:30.522 00:06:30.522 --- 10.0.0.2 ping statistics --- 00:06:30.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.522 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:30.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:30.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:06:30.522 00:06:30.522 --- 10.0.0.1 ping statistics --- 00:06:30.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.522 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3569897 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3569897 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
3569897 ']' 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.522 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:30.522 [2024-11-26 19:07:52.847780] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:06:30.522 [2024-11-26 19:07:52.847834] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:30.522 [2024-11-26 19:07:52.911190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.522 [2024-11-26 19:07:52.953640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:30.522 [2024-11-26 19:07:52.953689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:30.522 [2024-11-26 19:07:52.953696] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:30.522 [2024-11-26 19:07:52.953718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:30.522 [2024-11-26 19:07:52.953723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
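The nvmf_tgt instance launched above runs inside the cvl_0_0_ns_spdk network namespace that nvmf_tcp_init prepared a few entries earlier in this trace. A condensed sketch of that preparation, restated from the traced commands and not from the script itself (the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses and port 4420 are specific to this run; all commands assume root):

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1            # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk                                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP reach the target
    # nvmf_tgt and its RPC-facing helpers are then launched as: ip netns exec cvl_0_0_ns_spdk <command>

The single pings in each direction seen above confirm the 10.0.0.1 <-> 10.0.0.2 path before the target is started.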
00:06:30.522 [2024-11-26 19:07:52.955124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.522 [2024-11-26 19:07:52.955230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.522 [2024-11-26 19:07:52.955231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.522 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.522 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:30.522 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:30.522 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:30.522 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:30.522 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:30.522 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:30.522 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:30.522 [2024-11-26 19:07:53.255798] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.522 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:30.522 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:30.780 [2024-11-26 19:07:53.645169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:30.780 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:30.780 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:31.038 Malloc0 00:06:31.038 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:31.296 Delay0 00:06:31.296 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.553 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:31.811 NULL1 00:06:31.811 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:31.811 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3570232 00:06:31.811 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:31.811 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:31.811 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.183 Read completed with error (sct=0, sc=11) 00:06:33.183 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.183 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:33.183 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:33.439 true 00:06:33.440 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:33.440 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.372 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.372 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.372 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:34.372 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:34.630 true 00:06:34.630 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:34.630 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.888 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
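The @44-@50 trace lines that repeat from here on are the core of ns_hotplug_stress.sh: while the spdk_nvme_perf job started above (PERF_PID=3570232) keeps issuing reads, the script detaches and re-attaches the Delay0 namespace and bumps null_size so NULL1 (NSID 2) is resized on every pass. A minimal sketch of that loop, reconstructed from the traced commands rather than copied from the script (variable names are illustrative; the full version lives at spdk/test/nvmf/target/ns_hotplug_stress.sh):

    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
          -t 30 -q 128 -w randread -o 512 -Q 1000 &                     # 30 s of queued random reads
    PERF_PID=$!
    null_size=1000
    while kill -0 "$PERF_PID"; do                                       # loop while the perf job is alive
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-remove NSID 1 under I/O
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # re-attach the delay-wrapped bdev
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 "$null_size"                     # grow NULL1 so NSID 2 changes size
    done

The interleaved "Read completed with error (sct=0, sc=11)" messages are the expected initiator-side symptom of namespaces disappearing while reads are in flight; the point of the stress test is that the target keeps serving I/O through the churn.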
00:06:35.146 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:35.146 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:35.146 true 00:06:35.146 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:35.146 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.520 19:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.520 19:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:36.520 19:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:36.778 true 00:06:36.778 19:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:36.778 19:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.715 19:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.715 19:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:37.715 19:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:37.975 true 00:06:37.975 19:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:37.975 19:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.232 19:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.232 19:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:38.232 19:08:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:38.490 true 00:06:38.490 19:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:38.490 19:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.984 19:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.984 19:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:39.984 19:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:39.984 true 00:06:39.984 19:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:39.984 19:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.919 19:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.178 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:41.178 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:41.178 true 00:06:41.178 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:41.178 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.437 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.695 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:41.695 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:41.695 true 00:06:41.953 19:08:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:41.953 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.886 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.886 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.886 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.886 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.144 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:43.144 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:43.402 true 00:06:43.402 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:43.403 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.337 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.337 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:44.337 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:44.595 true 00:06:44.595 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:44.595 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.854 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.854 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:44.854 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:45.112 true 00:06:45.112 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:45.112 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.484 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.484 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.484 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.484 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.484 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.484 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.484 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.484 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.484 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:46.484 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:46.741 true 00:06:46.742 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:46.742 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.675 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.675 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:47.675 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:47.932 true 00:06:47.932 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:47.932 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.190 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.190 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:48.190 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:48.449 true 00:06:48.449 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:48.449 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.823 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:06:49.823 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.823 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:49.823 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:49.823 true 00:06:49.823 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:49.823 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.756 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.013 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:51.013 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:51.013 true 00:06:51.013 19:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:51.013 19:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.271 19:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.529 19:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:51.529 19:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:51.787 true 00:06:51.787 19:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:51.787 19:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.720 19:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.979 19:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:52.979 19:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:53.237 true 00:06:53.237 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:53.237 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.171 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.171 19:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:54.171 19:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:54.429 true 00:06:54.429 19:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:54.429 19:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.687 19:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.687 19:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:54.687 19:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:54.945 true 00:06:54.945 19:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:54.946 19:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.879 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.137 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:06:56.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.138 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:56.138 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:56.394 true 00:06:56.394 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:56.394 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.327 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.327 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:57.327 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:57.585 true 00:06:57.585 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:57.585 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.843 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.100 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:58.100 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:58.100 true 00:06:58.358 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:58.358 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.293 19:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.551 19:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:59.551 19:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1025 00:06:59.551 true 00:06:59.551 19:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:06:59.551 19:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.810 19:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.068 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:00.068 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:00.326 true 00:07:00.326 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:07:00.326 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.260 19:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.518 19:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:01.518 19:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:01.776 true 00:07:01.776 19:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:07:01.776 19:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.709 Initializing NVMe Controllers 00:07:02.709 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:02.709 Controller IO queue size 128, less than required. 00:07:02.709 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:02.709 Controller IO queue size 128, less than required. 00:07:02.709 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
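The entries above trace one pass per second through the single-namespace phase of the test: check that the background I/O job (PID 3570232) is still alive, hot-remove namespace 1, re-attach the Delay0 bdev, then grow NULL1 by one more unit. A minimal bash sketch of that loop, reconstructed from the ns_hotplug_stress.sh line markers (@44-@50) visible in the trace; rpc_py, NQN, io_pid and the starting null_size are illustrative stand-ins, not the verbatim SPDK script:

# Sketch of the per-iteration logic traced above; comments reference the
# @NN markers emitted by the script's xtrace, names are illustrative.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
io_pid=3570232      # background perf job driving reads at the target (from the trace)
null_size=1018      # illustrative starting size; the trace above is already past 1019

while kill -0 "$io_pid" 2>/dev/null; do               # @44: loop while the I/O job lives
    "$rpc_py" nvmf_subsystem_remove_ns "$NQN" 1       # @45: hot-remove NSID 1
    "$rpc_py" nvmf_subsystem_add_ns "$NQN" Delay0     # @46: re-attach the Delay0 bdev
    null_size=$((null_size + 1))                      # @49: 1019, 1020, 1021, ...
    "$rpc_py" bdev_null_resize NULL1 "$null_size"     # @50: resize NULL1 under live I/O
done
# Once the perf job exits, kill -0 reports "No such process" and the script
# waits for it and removes the namespaces, as traced further below.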
00:07:02.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:02.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:02.709 Initialization complete. Launching workers. 00:07:02.709 ======================================================== 00:07:02.709 Latency(us) 00:07:02.709 Device Information : IOPS MiB/s Average min max 00:07:02.709 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2081.60 1.02 44574.70 2300.35 1043667.94 00:07:02.709 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18474.63 9.02 6928.00 2143.27 447169.66 00:07:02.709 ======================================================== 00:07:02.709 Total : 20556.23 10.04 10740.25 2143.27 1043667.94 00:07:02.709 00:07:02.709 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.709 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:02.709 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:02.967 true 00:07:02.967 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3570232 00:07:02.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3570232) - No such process 00:07:02.967 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3570232 00:07:02.967 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.225 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:03.483 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:03.483 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:03.483 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:03.483 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:03.483 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:03.483 null0 00:07:03.741 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:03.741 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:03.741 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:03.741 null1 00:07:03.741 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
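In the latency summary just above, the Total row is the IOPS-weighted combination of the two per-namespace rows, and the MiB/s column is consistent with 512-byte I/O. A quick awk check of that arithmetic, using the rounded values exactly as printed (so the last digit may drift slightly):

# Recompute the Total row of the summary table from its two NSID rows.
awk 'BEGIN {
    iops1 = 2081.60;  avg1 = 44574.70   # NSID 1 row (the hot-plugged Delay0 namespace)
    iops2 = 18474.63; avg2 = 6928.00    # NSID 2 row (the NULL1 namespace being resized)
    total = iops1 + iops2
    printf "total IOPS       : %.2f (table: 20556.23)\n", total
    printf "weighted avg (us): %.2f (table: 10740.25, inputs are rounded)\n", (iops1 * avg1 + iops2 * avg2) / total
    printf "NSID 2 MiB/s     : %.2f (table: 9.02, i.e. 512-byte I/O)\n", iops2 * 512 / 1048576
    printf "total MiB/s      : %.2f (table: 10.04)\n", total * 512 / 1048576
}'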
00:07:03.741 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:03.741 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:04.000 null2 00:07:04.000 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:04.000 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:04.000 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:04.258 null3 00:07:04.258 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:04.258 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:04.258 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:04.516 null4 00:07:04.516 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:04.516 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:04.516 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:04.516 null5 00:07:04.516 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:04.516 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:04.516 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:04.774 null6 00:07:04.775 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:04.775 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:04.775 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:05.033 null7 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:05.033 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
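From this point the trace fans out into eight concurrent workers: null0 through null7 are created, one add_remove worker per bdev is started against NSIDs 1-8, and the script then waits on all of their PIDs (the "wait 3575847 3575849 ..." entry that follows). A bash reconstruction of that phase from the @58-@66 and @14-@18 markers; it mirrors the traced RPC invocations but is a sketch under those assumptions, not the verbatim script:

# Sketch of the fan-out phase; @NN comments reference the xtrace markers above.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

add_remove() {                        # @14: one worker per (nsid, bdev) pair
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do    # @16: ten add/remove rounds per worker
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"   # @17
        "$rpc_py" nvmf_subsystem_remove_ns "$NQN" "$nsid"           # @18
    done
}

nthreads=8                                           # @58
pids=()
for ((i = 0; i < nthreads; i++)); do
    "$rpc_py" bdev_null_create "null$i" 100 4096     # @60: 100 MiB, 4096-byte blocks
done
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &               # @63: run the workers concurrently
    pids+=($!)                                       # @64: collect worker PIDs
done
wait "${pids[@]}"                                    # @66: e.g. "wait 3575847 3575849 ..."
# The interleaved add_ns/remove_ns entries below are these eight workers racing.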
00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:05.034 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:05.034 19:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:05.034 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:05.034 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3575847 3575849 3575850 3575852 3575854 3575856 3575858 3575860 00:07:05.034 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:05.034 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:05.034 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.034 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:05.292 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.292 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:05.292 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:05.292 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:05.292 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:05.292 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:05.292 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:05.292 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:05.292 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.292 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.292 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:05.292 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.292 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.292 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:05.292 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.292 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.292 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:05.292 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.292 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:05.550 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:05.551 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.809 19:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:06.066 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:06.066 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:06.066 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.066 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:06.066 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:06.066 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:06.067 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:06.067 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:06.324 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.582 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:06.840 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:06.840 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:06.840 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.840 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:06.840 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:06.840 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:06.840 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:06.840 19:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.097 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.097 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.097 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.097 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.097 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.097 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.097 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.097 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.097 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.097 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:07.097 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.097 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:07.097 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.097 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.097 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.097 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.097 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.098 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.098 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.098 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.098 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.098 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.098 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.098 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.354 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.355 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.355 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.355 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.355 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.355 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.355 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.355 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.612 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.869 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.869 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.869 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.869 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.869 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.869 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.869 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.869 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.869 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.869 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.869 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.869 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.869 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.869 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.869 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.869 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.869 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.869 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.870 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:07.870 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.870 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.870 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.870 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.870 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.870 19:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:08.126 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.126 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:08.127 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.127 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:08.127 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:08.127 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.127 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:08.127 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:08.384 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.384 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.384 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:08.384 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.384 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.384 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:08.385 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.385 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.385 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.385 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.385 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.385 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:08.385 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.385 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.385 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:08.385 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.385 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.385 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:08.385 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.385 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.385 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:08.385 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.385 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.385 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
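The trace above is the namespace hot-plug loop of ns_hotplug_stress.sh. Script line 16 is a counted loop (the (( ++i )) / (( i < 10 )) pairs), line 17 hot-adds a null bdev to nqn.2016-06.io.spdk:cnode1 under a fixed namespace ID, and line 18 hot-removes it again; the interleaved counters show eight of these loops running concurrently, one per nsid 1-8, each paired with bdev null0-null7. A minimal sketch of that structure, reconstructed from the logged rpc.py calls; the add_remove helper name and the backgrounding detail are assumptions, not the script's actual code:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        # One worker per namespace: hot-add its null bdev under a fixed nsid,
        # then hot-remove it, ten times (script lines 16-18 in the trace).
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; ++i)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$nsid"
        done
    }

    # Eight workers run in parallel, which is why their ++i / i<10 / add/remove
    # lines interleave in the log above.
    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &
    done
    wait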
00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.642 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.643 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:08.643 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.643 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.643 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:08.643 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.643 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.643 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:08.900 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.900 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:08.900 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.900 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:08.900 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:08.900 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:08.900 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.900 19:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:09.158 rmmod nvme_tcp 00:07:09.158 rmmod nvme_fabrics 00:07:09.158 rmmod nvme_keyring 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3569897 ']' 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3569897 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3569897 ']' 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3569897 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.158 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3569897 00:07:09.418 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:09.418 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:09.418 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3569897' 00:07:09.418 killing process with pid 3569897 00:07:09.418 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3569897 00:07:09.418 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3569897 00:07:09.418 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:09.418 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:09.418 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:09.418 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:09.418 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:09.418 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:09.418 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:09.418 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:09.418 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:09.418 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.418 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.418 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.955 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:11.955 00:07:11.955 real 0m47.972s 00:07:11.955 user 3m14.068s 00:07:11.955 sys 0m15.486s 00:07:11.955 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.955 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:11.955 ************************************ 00:07:11.955 END TEST nvmf_ns_hotplug_stress 00:07:11.955 ************************************ 00:07:11.955 19:08:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:11.955 19:08:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:11.955 19:08:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.955 19:08:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:11.955 ************************************ 00:07:11.955 START TEST nvmf_delete_subsystem 00:07:11.955 ************************************ 00:07:11.955 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:11.955 * Looking for test storage... 
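The real/user/sys summary and the END/START banners above come from the harness's run_test wrapper, which times each test script, prints the banner delimiters, and then launches the next test (here delete_subsystem.sh --transport=tcp). A rough sketch of that kind of wrapper, inferred only from the banners and timing visible in the log, not from the actual autotest_common.sh implementation:

    run_test() {
        # Print a START banner, run the test under `time`, then print an END
        # banner; the banner text mirrors what the log shows.
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # Mirroring the invocation in the log:
    # run_test nvmf_delete_subsystem ./test/nvmf/target/delete_subsystem.sh --transport=tcp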
00:07:11.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.955 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:11.955 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:07:11.955 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:11.955 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:11.955 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.955 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:11.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.956 --rc genhtml_branch_coverage=1 00:07:11.956 --rc genhtml_function_coverage=1 00:07:11.956 --rc genhtml_legend=1 00:07:11.956 --rc geninfo_all_blocks=1 00:07:11.956 --rc geninfo_unexecuted_blocks=1 00:07:11.956 00:07:11.956 ' 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:11.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.956 --rc genhtml_branch_coverage=1 00:07:11.956 --rc genhtml_function_coverage=1 00:07:11.956 --rc genhtml_legend=1 00:07:11.956 --rc geninfo_all_blocks=1 00:07:11.956 --rc geninfo_unexecuted_blocks=1 00:07:11.956 00:07:11.956 ' 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.956 --rc genhtml_branch_coverage=1 00:07:11.956 --rc genhtml_function_coverage=1 00:07:11.956 --rc genhtml_legend=1 00:07:11.956 --rc geninfo_all_blocks=1 00:07:11.956 --rc geninfo_unexecuted_blocks=1 00:07:11.956 00:07:11.956 ' 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.956 --rc genhtml_branch_coverage=1 00:07:11.956 --rc genhtml_function_coverage=1 00:07:11.956 --rc genhtml_legend=1 00:07:11.956 --rc geninfo_all_blocks=1 00:07:11.956 --rc geninfo_unexecuted_blocks=1 00:07:11.956 00:07:11.956 ' 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:11.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:11.956 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:11.957 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:11.957 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.957 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.957 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.957 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:11.957 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:11.957 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:11.957 19:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:18.525 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:18.526 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:18.526 
19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:18.526 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:18.526 Found net devices under 0000:86:00.0: cvl_0_0 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:18.526 Found net devices under 0000:86:00.1: cvl_0_1 
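The "Found 0000:86:00.x (0x8086 - 0x159b)" and "Found net devices under ..." lines show nvmf/common.sh scanning the PCI bus for supported NICs: device ID 0x159b is an Intel E810 port driven by ice, and for each matching function the script records the kernel net devices bound to it (cvl_0_0 and cvl_0_1 here) before one of them is moved into a network namespace for the target. A simplified sysfs-only sketch of that discovery step; the real common.sh logic also covers the Mellanox IDs, RDMA transports, and unbound-device cases seen in the trace:

    # Enumerate PCI functions with vendor 0x8086 / device 0x159b (E810) and the
    # net devices currently bound to them, mirroring the "Found ..." lines above.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        bdf=${pci##*/}
        echo "Found $bdf (0x8086 - 0x159b)"
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under $bdf: ${net##*/}"
        done
    done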
00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:18.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:18.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:07:18.526 00:07:18.526 --- 10.0.0.2 ping statistics --- 00:07:18.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.526 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:18.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:18.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:07:18.526 00:07:18.526 --- 10.0.0.1 ping statistics --- 00:07:18.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.526 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3580241 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3580241 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3580241 ']' 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.526 19:08:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.526 19:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.526 [2024-11-26 19:08:40.815457] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:07:18.527 [2024-11-26 19:08:40.815498] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.527 [2024-11-26 19:08:40.891354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:18.527 [2024-11-26 19:08:40.930659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.527 [2024-11-26 19:08:40.930702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:18.527 [2024-11-26 19:08:40.930709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.527 [2024-11-26 19:08:40.930715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.527 [2024-11-26 19:08:40.930720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:18.527 [2024-11-26 19:08:40.931939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.527 [2024-11-26 19:08:40.931940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.527 [2024-11-26 19:08:41.081063] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:18.527 19:08:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.527 [2024-11-26 19:08:41.101268] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.527 NULL1 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.527 Delay0 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3580437 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:18.527 19:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:18.527 [2024-11-26 19:08:41.212192] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
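At this point delete_subsystem.sh has built its test stack over the RPC socket: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, a 1000 MB null bdev wrapped in a delay bdev (Delay0) and exposed as a namespace, and spdk_nvme_perf driving queue-depth-128, 70/30 random read/write, 512-byte I/O at it for 5 seconds. After the sleep 2 above, the subsystem is deleted while that traffic is still outstanding, which is why the perf job's commands start completing with errors in the output that follows. The sequence below collects the RPC calls from the trace (rpc_cmd in the log is the harness wrapper around rpc.py; the $rpc_py shorthand is added here for readability):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    "$rpc_py" nvmf_create_transport -t tcp -o -u 8192
    "$rpc_py" nvmf_create_subsystem "$subsys" -a -s SPDK00000000000001 -m 10
    "$rpc_py" nvmf_subsystem_add_listener "$subsys" -t tcp -a 10.0.0.2 -s 4420
    "$rpc_py" bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512-byte blocks
    "$rpc_py" bdev_delay_create -b NULL1 -d Delay0 \
              -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s of added latency (values in microseconds)
    "$rpc_py" nvmf_subsystem_add_ns "$subsys" Delay0

    # spdk_nvme_perf is then started against the listener, and two seconds later
    # the subsystem is torn down underneath it:
    "$rpc_py" nvmf_delete_subsystem "$subsys"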
00:07:20.425 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.425 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.425 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 starting I/O failed: -6 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 starting I/O failed: -6 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 starting I/O failed: -6 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 starting I/O failed: -6 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 starting I/O failed: -6 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 starting I/O failed: -6 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 starting I/O failed: -6 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 starting I/O failed: -6 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 starting I/O failed: -6 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 starting I/O failed: -6 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 [2024-11-26 19:08:43.330533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10782c0 is same with the state(6) to be set 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, 
sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Read completed with error (sct=0, sc=8) 00:07:20.425 Write completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Write completed with 
error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 starting I/O failed: -6 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 starting I/O failed: -6 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 starting I/O failed: -6 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 starting I/O failed: -6 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 starting I/O failed: -6 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 starting I/O failed: -6 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 starting I/O failed: -6 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 starting I/O failed: -6 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 starting I/O failed: -6 00:07:20.426 Read completed with error (sct=0, sc=8) 
00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 starting I/O failed: -6 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 Write completed with error (sct=0, sc=8) 00:07:20.426 Read completed with error (sct=0, sc=8) 00:07:20.426 [2024-11-26 19:08:43.331543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9e6000d4b0 is same with the state(6) to be set 00:07:21.359 [2024-11-26 19:08:44.305839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10799b0 is same with the state(6) to be set 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 [2024-11-26 19:08:44.331986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9e60000c40 is same with the state(6) to be set 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read 
completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 [2024-11-26 19:08:44.332229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9e6000d020 is same with the state(6) to be set 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error 
(sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 [2024-11-26 19:08:44.332512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9e6000d7e0 is same with the state(6) to be set 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Read completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.359 Write completed with error (sct=0, sc=8) 00:07:21.360 Write completed with error (sct=0, sc=8) 00:07:21.360 Read completed with error (sct=0, sc=8) 00:07:21.360 Write completed with error (sct=0, sc=8) 00:07:21.360 Read completed with error (sct=0, sc=8) 00:07:21.360 Read completed with error (sct=0, sc=8) 00:07:21.360 Read completed with error (sct=0, sc=8) 00:07:21.360 Write completed with error (sct=0, sc=8) 00:07:21.360 [2024-11-26 19:08:44.334036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10784a0 is same with the state(6) to be set 00:07:21.360 Initializing NVMe Controllers 00:07:21.360 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:21.360 Controller IO queue size 128, less than required. 00:07:21.360 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:21.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:21.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:21.360 Initialization complete. Launching workers. 
00:07:21.360 ======================================================== 00:07:21.360 Latency(us) 00:07:21.360 Device Information : IOPS MiB/s Average min max 00:07:21.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 158.08 0.08 865844.71 266.34 1008758.19 00:07:21.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.06 0.08 1143596.81 1837.64 2001284.67 00:07:21.360 ======================================================== 00:07:21.360 Total : 321.14 0.16 1006870.55 266.34 2001284.67 00:07:21.360 00:07:21.360 [2024-11-26 19:08:44.334412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10799b0 (9): Bad file descriptor 00:07:21.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:21.360 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.360 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:21.360 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3580437 00:07:21.360 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3580437 00:07:21.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3580437) - No such process 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3580437 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3580437 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3580437 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.927 19:08:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.927 [2024-11-26 19:08:44.860837] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3580956 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3580956 00:07:21.927 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:21.927 [2024-11-26 19:08:44.943470] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
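The second pass recreates nqn.2016-06.io.spdk:cnode1, but this time the subsystem is left in place and the test simply polls until the 3-second spdk_nvme_perf run (pid 3580956) exits on its own, so it is expected to finish cleanly. The repeated "(( delay++ > 20 ))" / "kill -0" / "sleep 0.5" lines below are that polling loop; roughly, it amounts to the sketch that follows (paraphrased from the trace rather than copied from delete_subsystem.sh, with the timeout handling assumed):

    # Poll until spdk_nvme_perf (pid 3580956 in this run) exits, giving up
    # after roughly 10 s (about 20 iterations of 0.5 s each).
    delay=0
    while kill -0 "$perf_pid"; do       # succeeds while the process still exists
        (( delay++ > 20 )) && exit 1    # assumed failure path on timeout
        sleep 0.5
    done
    wait "$perf_pid"                    # collect the perf exit status

Once kill -0 reports "No such process", the trace moves on to "wait 3580956" and then to nvmftestfini, which unloads the host nvme-tcp/nvme-fabrics modules and kills the nvmf target application (pid 3580241).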
00:07:22.491 19:08:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:22.491 19:08:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3580956 00:07:22.491 19:08:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:23.055 19:08:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:23.055 19:08:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3580956 00:07:23.055 19:08:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:23.313 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:23.313 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3580956 00:07:23.313 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:23.878 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:23.878 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3580956 00:07:23.878 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:24.443 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:24.443 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3580956 00:07:24.443 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:25.008 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:25.008 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3580956 00:07:25.008 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:25.266 Initializing NVMe Controllers 00:07:25.266 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:25.266 Controller IO queue size 128, less than required. 00:07:25.266 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:25.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:25.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:25.266 Initialization complete. Launching workers. 
00:07:25.266 ======================================================== 00:07:25.266 Latency(us) 00:07:25.266 Device Information : IOPS MiB/s Average min max 00:07:25.266 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002126.21 1000128.12 1040293.25 00:07:25.266 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003890.44 1000150.20 1041449.44 00:07:25.266 ======================================================== 00:07:25.266 Total : 256.00 0.12 1003008.33 1000128.12 1041449.44 00:07:25.266 00:07:25.524 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:25.524 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3580956 00:07:25.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3580956) - No such process 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3580956 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:25.525 rmmod nvme_tcp 00:07:25.525 rmmod nvme_fabrics 00:07:25.525 rmmod nvme_keyring 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3580241 ']' 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3580241 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3580241 ']' 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3580241 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3580241 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3580241' 00:07:25.525 killing process with pid 3580241 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3580241 00:07:25.525 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3580241 00:07:25.784 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:25.784 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:25.784 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:25.784 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:25.784 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:25.784 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:25.784 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:25.784 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:25.784 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:25.784 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.784 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.784 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.686 19:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:27.686 00:07:27.686 real 0m16.174s 00:07:27.686 user 0m29.409s 00:07:27.686 sys 0m5.498s 00:07:27.686 19:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.686 19:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:27.686 ************************************ 00:07:27.686 END TEST nvmf_delete_subsystem 00:07:27.686 ************************************ 00:07:27.957 19:08:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:27.957 19:08:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.957 19:08:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.957 19:08:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:27.957 ************************************ 00:07:27.957 START TEST nvmf_host_management 00:07:27.957 ************************************ 00:07:27.957 19:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:27.957 * Looking for test storage... 
00:07:27.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.957 19:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.957 19:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.957 19:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.957 --rc genhtml_branch_coverage=1 00:07:27.957 --rc genhtml_function_coverage=1 00:07:27.957 --rc genhtml_legend=1 00:07:27.957 --rc geninfo_all_blocks=1 00:07:27.957 --rc geninfo_unexecuted_blocks=1 00:07:27.957 00:07:27.957 ' 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.957 --rc genhtml_branch_coverage=1 00:07:27.957 --rc genhtml_function_coverage=1 00:07:27.957 --rc genhtml_legend=1 00:07:27.957 --rc geninfo_all_blocks=1 00:07:27.957 --rc geninfo_unexecuted_blocks=1 00:07:27.957 00:07:27.957 ' 00:07:27.957 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:27.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.957 --rc genhtml_branch_coverage=1 00:07:27.957 --rc genhtml_function_coverage=1 00:07:27.957 --rc genhtml_legend=1 00:07:27.957 --rc geninfo_all_blocks=1 00:07:27.958 --rc geninfo_unexecuted_blocks=1 00:07:27.958 00:07:27.958 ' 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.958 --rc genhtml_branch_coverage=1 00:07:27.958 --rc genhtml_function_coverage=1 00:07:27.958 --rc genhtml_legend=1 00:07:27.958 --rc geninfo_all_blocks=1 00:07:27.958 --rc geninfo_unexecuted_blocks=1 00:07:27.958 00:07:27.958 ' 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:27.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.958 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:27.959 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:27.959 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:27.959 19:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.528 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:34.528 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:34.528 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:34.528 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:34.528 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:34.528 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:34.528 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:34.528 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:34.528 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:34.528 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:34.528 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:34.528 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:34.528 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:34.528 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:34.528 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:34.529 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:34.529 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:34.529 Found net devices under 0000:86:00.0: cvl_0_0 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.529 19:08:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:34.529 Found net devices under 0000:86:00.1: cvl_0_1 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:34.529 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.529 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.529 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:34.529 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:34.529 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:34.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:07:34.529 00:07:34.529 --- 10.0.0.2 ping statistics --- 00:07:34.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.529 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:07:34.529 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:34.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:07:34.530 00:07:34.530 --- 10.0.0.1 ping statistics --- 00:07:34.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.530 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3585190 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3585190 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:34.530 19:08:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3585190 ']' 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.530 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.530 [2024-11-26 19:08:57.174001] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:07:34.530 [2024-11-26 19:08:57.174052] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.530 [2024-11-26 19:08:57.253087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.530 [2024-11-26 19:08:57.296293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.530 [2024-11-26 19:08:57.296332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.530 [2024-11-26 19:08:57.296339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.530 [2024-11-26 19:08:57.296345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.530 [2024-11-26 19:08:57.296350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
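For reference, the nvmf_tcp_init trace above builds the test topology by hand: the first port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while its peer (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1; an iptables rule then opens TCP port 4420 and a ping in each direction confirms the path. A minimal standalone sketch of the same steps (assuming root privileges and the interface names this particular run happened to use) is:

# Sketch only: mirrors the nvmf_tcp_init steps traced above.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"                                        # target side gets its own namespace
ip link set cvl_0_0 netns "$NS"                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator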
00:07:34.530 [2024-11-26 19:08:57.297869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.530 [2024-11-26 19:08:57.297974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.530 [2024-11-26 19:08:57.298080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.530 [2024-11-26 19:08:57.298081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:35.095 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.095 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:35.095 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:35.095 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:35.095 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.095 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.095 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:35.095 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.096 [2024-11-26 19:08:58.050134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.096 Malloc0 00:07:35.096 [2024-11-26 19:08:58.118534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3585459 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3585459 /var/tmp/bdevperf.sock 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3585459 ']' 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:35.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:35.096 { 00:07:35.096 "params": { 00:07:35.096 "name": "Nvme$subsystem", 00:07:35.096 "trtype": "$TEST_TRANSPORT", 00:07:35.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:35.096 "adrfam": "ipv4", 00:07:35.096 "trsvcid": "$NVMF_PORT", 00:07:35.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:35.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:35.096 "hdgst": ${hdgst:-false}, 00:07:35.096 "ddgst": ${ddgst:-false} 00:07:35.096 }, 00:07:35.096 "method": "bdev_nvme_attach_controller" 00:07:35.096 } 00:07:35.096 EOF 00:07:35.096 )") 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:35.096 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:35.096 "params": { 00:07:35.096 "name": "Nvme0", 00:07:35.096 "trtype": "tcp", 00:07:35.096 "traddr": "10.0.0.2", 00:07:35.096 "adrfam": "ipv4", 00:07:35.096 "trsvcid": "4420", 00:07:35.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:35.096 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:35.096 "hdgst": false, 00:07:35.096 "ddgst": false 00:07:35.096 }, 00:07:35.096 "method": "bdev_nvme_attach_controller" 00:07:35.096 }' 00:07:35.353 [2024-11-26 19:08:58.213746] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
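At this point the test launches bdevperf as the initiator. The JSON fed through /dev/fd/63 is the bdev_nvme_attach_controller fragment printed just above, which gen_nvmf_target_json presumably wraps in the standard SPDK "subsystems" config envelope (the wrapper itself is not echoed in the trace). A rough standalone equivalent, written as a sketch rather than the exact helper output, with paths assumed relative to an SPDK build tree:

# Sketch: run bdevperf by hand with the parameters this test used (-q 64 -o 65536
# -w verify -t 10). The "params" block is taken verbatim from the trace above;
# the surrounding envelope is an assumption based on the usual SPDK JSON config format.
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!

# The waitforio helper traced below boils down to polling read counters over the
# bdevperf RPC socket until at least 100 reads have completed:
./scripts/rpc.py -s /var/tmp/bdevperf.sock framework_wait_init
for _ in $(seq 10); do
    reads=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break
    sleep 0.25
done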
00:07:35.353 [2024-11-26 19:08:58.213794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3585459 ] 00:07:35.353 [2024-11-26 19:08:58.286975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.353 [2024-11-26 19:08:58.328513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.611 Running I/O for 10 seconds... 00:07:35.611 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.611 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:35.611 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:35.611 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.611 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.611 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.611 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:35.611 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:35.611 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:35.611 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:35.611 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:35.611 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:35.611 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:35.611 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:35.611 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:35.611 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.611 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.611 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:35.869 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.869 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:35.869 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:35.869 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:36.127 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:36.127 
19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:36.127 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:36.127 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:36.127 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.127 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.127 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.127 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:07:36.127 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:07:36.127 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:36.127 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:36.127 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:36.127 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:36.127 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.127 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.127 [2024-11-26 19:08:59.057938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.127 [2024-11-26 19:08:59.057975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.127 [2024-11-26 19:08:59.057991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.127 [2024-11-26 19:08:59.057998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.127 [2024-11-26 19:08:59.058013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.127 [2024-11-26 19:08:59.058020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.127 [2024-11-26 19:08:59.058029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.127 [2024-11-26 19:08:59.058035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.127 [2024-11-26 19:08:59.058043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.127 [2024-11-26 19:08:59.058050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:36.127 [2024-11-26 19:08:59.058058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.127 [2024-11-26 19:08:59.058065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.127 [2024-11-26 19:08:59.058073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.127 [2024-11-26 19:08:59.058079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.127 [2024-11-26 19:08:59.058087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.127 [2024-11-26 19:08:59.058094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.127 [2024-11-26 19:08:59.058102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.127 [2024-11-26 19:08:59.058108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.127 [2024-11-26 19:08:59.058116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.127 [2024-11-26 19:08:59.058122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.127 [2024-11-26 19:08:59.058130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.127 [2024-11-26 19:08:59.058137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.127 [2024-11-26 19:08:59.058144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.127 [2024-11-26 19:08:59.058153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.127 [2024-11-26 19:08:59.058162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.127 [2024-11-26 19:08:59.058168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.127 [2024-11-26 19:08:59.058176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.127 [2024-11-26 19:08:59.058183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.127 [2024-11-26 19:08:59.058191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.127 [2024-11-26 19:08:59.058200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:36.127 [2024-11-26 19:08:59.058208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.127 [2024-11-26 19:08:59.058215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.127 [2024-11-26 19:08:59.058223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.127 [2024-11-26 19:08:59.058229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.127 [2024-11-26 19:08:59.058238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.127 [2024-11-26 19:08:59.058245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.127 [2024-11-26 19:08:59.058253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.127 [2024-11-26 19:08:59.058260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.127 [2024-11-26 19:08:59.058268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 
19:08:59.058354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 
19:08:59.058506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 
19:08:59.058653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.128 [2024-11-26 19:08:59.058711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.128 [2024-11-26 19:08:59.058720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.129 [2024-11-26 19:08:59.058727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.129 [2024-11-26 19:08:59.058735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.129 [2024-11-26 19:08:59.058742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.129 [2024-11-26 19:08:59.058750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.129 [2024-11-26 19:08:59.058757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.129 [2024-11-26 19:08:59.058770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.129 [2024-11-26 19:08:59.058777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.129 [2024-11-26 19:08:59.058785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.129 [2024-11-26 19:08:59.058791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.129 [2024-11-26 19:08:59.058799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.129 [2024-11-26 19:08:59.058806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.129 [2024-11-26 
19:08:59.058814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.129 [2024-11-26 19:08:59.058821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.129 [2024-11-26 19:08:59.058829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.129 [2024-11-26 19:08:59.058835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.129 [2024-11-26 19:08:59.058843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.129 [2024-11-26 19:08:59.058850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.129 [2024-11-26 19:08:59.058858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.129 [2024-11-26 19:08:59.058864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.129 [2024-11-26 19:08:59.058873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.129 [2024-11-26 19:08:59.058879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.129 [2024-11-26 19:08:59.058887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.129 [2024-11-26 19:08:59.058893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.129 [2024-11-26 19:08:59.058901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.129 [2024-11-26 19:08:59.058907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.129 [2024-11-26 19:08:59.058915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.129 [2024-11-26 19:08:59.058922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.129 [2024-11-26 19:08:59.058930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.129 [2024-11-26 19:08:59.058936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.129 [2024-11-26 19:08:59.059901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:36.129 task offset: 104960 on job bdev=Nvme0n1 fails 00:07:36.129 00:07:36.129 Latency(us) 00:07:36.129 [2024-11-26T18:08:59.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.129 Job: Nvme0n1 (Core Mask 
0x1, workload: verify, depth: 64, IO size: 65536) 00:07:36.129 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:36.129 Verification LBA range: start 0x0 length 0x400 00:07:36.129 Nvme0n1 : 0.40 1919.43 119.96 159.95 0.00 29957.36 1529.17 26838.55 00:07:36.129 [2024-11-26T18:08:59.243Z] =================================================================================================================== 00:07:36.129 [2024-11-26T18:08:59.243Z] Total : 1919.43 119.96 159.95 0.00 29957.36 1529.17 26838.55 00:07:36.129 [2024-11-26 19:08:59.062335] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.129 [2024-11-26 19:08:59.062367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb5510 (9): Bad file descriptor 00:07:36.129 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.129 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:36.129 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.129 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.129 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.129 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:36.129 [2024-11-26 19:08:59.073429] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:37.058 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3585459 00:07:37.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3585459) - No such process 00:07:37.058 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:37.058 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:37.058 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:37.059 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:37.059 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:37.059 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:37.059 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:37.059 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:37.059 { 00:07:37.059 "params": { 00:07:37.059 "name": "Nvme$subsystem", 00:07:37.059 "trtype": "$TEST_TRANSPORT", 00:07:37.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:37.059 "adrfam": "ipv4", 00:07:37.059 "trsvcid": "$NVMF_PORT", 00:07:37.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:37.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:37.059 "hdgst": ${hdgst:-false}, 
00:07:37.059 "ddgst": ${ddgst:-false} 00:07:37.059 }, 00:07:37.059 "method": "bdev_nvme_attach_controller" 00:07:37.059 } 00:07:37.059 EOF 00:07:37.059 )") 00:07:37.059 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:37.059 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:37.059 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:37.059 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:37.059 "params": { 00:07:37.059 "name": "Nvme0", 00:07:37.059 "trtype": "tcp", 00:07:37.059 "traddr": "10.0.0.2", 00:07:37.059 "adrfam": "ipv4", 00:07:37.059 "trsvcid": "4420", 00:07:37.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:37.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:37.059 "hdgst": false, 00:07:37.059 "ddgst": false 00:07:37.059 }, 00:07:37.059 "method": "bdev_nvme_attach_controller" 00:07:37.059 }' 00:07:37.059 [2024-11-26 19:09:00.127042] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:07:37.059 [2024-11-26 19:09:00.127092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3585730 ] 00:07:37.315 [2024-11-26 19:09:00.203103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.315 [2024-11-26 19:09:00.244015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.571 Running I/O for 1 seconds... 00:07:38.499 2347.00 IOPS, 146.69 MiB/s 00:07:38.499 Latency(us) 00:07:38.499 [2024-11-26T18:09:01.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.499 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:38.499 Verification LBA range: start 0x0 length 0x400 00:07:38.499 Nvme0n1 : 1.01 2391.40 149.46 0.00 0.00 26268.15 2044.10 26963.38 00:07:38.499 [2024-11-26T18:09:01.613Z] =================================================================================================================== 00:07:38.499 [2024-11-26T18:09:01.613Z] Total : 2391.40 149.46 0.00 0.00 26268.15 2044.10 26963.38 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:38.756 19:09:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:38.756 rmmod nvme_tcp 00:07:38.756 rmmod nvme_fabrics 00:07:38.756 rmmod nvme_keyring 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3585190 ']' 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3585190 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3585190 ']' 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3585190 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3585190 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3585190' 00:07:38.756 killing process with pid 3585190 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3585190 00:07:38.756 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3585190 00:07:39.014 [2024-11-26 19:09:01.896007] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:39.014 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:39.014 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:39.014 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:39.014 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:39.014 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:39.014 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:39.014 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:39.014 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:39.014 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:39.014 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.014 19:09:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.014 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.917 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:40.917 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:40.917 00:07:40.917 real 0m13.158s 00:07:40.917 user 0m22.592s 00:07:40.917 sys 0m5.685s 00:07:40.917 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.917 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:40.917 ************************************ 00:07:40.917 END TEST nvmf_host_management 00:07:40.917 ************************************ 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:41.176 ************************************ 00:07:41.176 START TEST nvmf_lvol 00:07:41.176 ************************************ 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:41.176 * Looking for test storage... 
00:07:41.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:41.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.176 --rc genhtml_branch_coverage=1 00:07:41.176 --rc genhtml_function_coverage=1 00:07:41.176 --rc genhtml_legend=1 00:07:41.176 --rc geninfo_all_blocks=1 00:07:41.176 --rc geninfo_unexecuted_blocks=1 00:07:41.176 00:07:41.176 ' 00:07:41.176 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:41.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.176 --rc genhtml_branch_coverage=1 00:07:41.176 --rc genhtml_function_coverage=1 00:07:41.177 --rc genhtml_legend=1 00:07:41.177 --rc geninfo_all_blocks=1 00:07:41.177 --rc geninfo_unexecuted_blocks=1 00:07:41.177 00:07:41.177 ' 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:41.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.177 --rc genhtml_branch_coverage=1 00:07:41.177 --rc genhtml_function_coverage=1 00:07:41.177 --rc genhtml_legend=1 00:07:41.177 --rc geninfo_all_blocks=1 00:07:41.177 --rc geninfo_unexecuted_blocks=1 00:07:41.177 00:07:41.177 ' 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:41.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.177 --rc genhtml_branch_coverage=1 00:07:41.177 --rc genhtml_function_coverage=1 00:07:41.177 --rc genhtml_legend=1 00:07:41.177 --rc geninfo_all_blocks=1 00:07:41.177 --rc geninfo_unexecuted_blocks=1 00:07:41.177 00:07:41.177 ' 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
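Editor's note: the lt/cmp_versions trace just above (scripts/common.sh) decides whether the installed lcov is older than 2 by splitting both version strings on '.', '-' and ':' and comparing the fields numerically. A minimal standalone sketch of that comparison, with an illustrative function name rather than the script's own helpers, and assuming purely numeric fields:

#!/usr/bin/env bash
# version_lt A B: succeed (return 0) when version A is strictly lower than version B.
version_lt() {
  local IFS='.-:'                      # split fields the same way the trace does
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}    # missing fields count as 0
    (( x > y )) && return 1
    (( x < y )) && return 0
  done
  return 1                             # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 < 2, use the lcov_branch/function coverage flags"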
00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:41.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.177 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.436 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:41.436 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:41.436 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:41.436 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:48.004 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.005 19:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:48.005 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:48.005 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.005 19:09:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:48.005 Found net devices under 0000:86:00.0: cvl_0_0 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:48.005 Found net devices under 0000:86:00.1: cvl_0_1 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.005 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:48.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:07:48.006 00:07:48.006 --- 10.0.0.2 ping statistics --- 00:07:48.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.006 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
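Editor's note: the nvmf_tcp_init sequence traced above reduces to the standalone commands below (run as root). The cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses and the SPDK_NVMF comment tag come from the log; the framing itself is only a sketch of what nvmf/common.sh does, not the harness code:

#!/usr/bin/env bash
set -e
NS=cvl_0_0_ns_spdk        # target-side network namespace
TGT_IF=cvl_0_0            # port moved into the namespace (target)
INI_IF=cvl_0_1            # port left in the default namespace (initiator)

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# let NVMe/TCP traffic to port 4420 in, tagged so teardown can strip only this rule
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF: test rule'

ping -c 1 10.0.0.2                        # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator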
00:07:48.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:07:48.006 00:07:48.006 --- 10.0.0.1 ping statistics --- 00:07:48.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.006 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3590023 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3590023 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3590023 ']' 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:48.006 [2024-11-26 19:09:10.358976] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
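Editor's note: the nvmfappstart step above launches nvmf_tgt inside the target namespace and then waits for its RPC socket. A hedged equivalent of that launch is sketched below; the polling loop only approximates waitforlisten and is not the harness code:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
# poll the default RPC socket until the target answers
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done
echo "nvmf_tgt up with pid $nvmfpid"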
00:07:48.006 [2024-11-26 19:09:10.359021] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.006 [2024-11-26 19:09:10.437169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.006 [2024-11-26 19:09:10.478566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.006 [2024-11-26 19:09:10.478601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.006 [2024-11-26 19:09:10.478609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.006 [2024-11-26 19:09:10.478615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.006 [2024-11-26 19:09:10.478621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.006 [2024-11-26 19:09:10.479872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.006 [2024-11-26 19:09:10.479907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.006 [2024-11-26 19:09:10.479908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:48.006 [2024-11-26 19:09:10.789239] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.006 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:48.006 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:48.006 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:48.265 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:48.265 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:48.523 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:48.782 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=06438f1f-30ab-41d5-b595-d5eb41acdc99 00:07:48.782 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 06438f1f-30ab-41d5-b595-d5eb41acdc99 lvol 20 00:07:48.782 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=fcafffdf-7473-4f9d-a646-ab5140f8697f 00:07:48.782 19:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:49.041 19:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fcafffdf-7473-4f9d-a646-ab5140f8697f 00:07:49.299 19:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:49.299 [2024-11-26 19:09:12.397294] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.558 19:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:49.558 19:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3590485 00:07:49.558 19:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:49.558 19:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:50.930 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot fcafffdf-7473-4f9d-a646-ab5140f8697f MY_SNAPSHOT 00:07:50.930 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=007c9ca7-0b24-43b3-8315-9857bbcc921f 00:07:50.930 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize fcafffdf-7473-4f9d-a646-ab5140f8697f 30 00:07:51.187 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 007c9ca7-0b24-43b3-8315-9857bbcc921f MY_CLONE 00:07:51.444 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=011abe28-1813-4441-bd2a-3e15eadfc3f4 00:07:51.444 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 011abe28-1813-4441-bd2a-3e15eadfc3f4 00:07:52.009 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3590485 00:08:00.118 Initializing NVMe Controllers 00:08:00.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:00.118 Controller IO queue size 128, less than required. 00:08:00.118 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
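Editor's note: condensed from the trace above, the nvmf_lvol test body amounts to the RPC sequence below. rpc points at the scripts/rpc.py path shown in the log, and the shell variables stand in for the UUIDs the harness captures from each call's output (06438f1f-30ab-41d5-b595-d5eb41acdc99 and fcafffdf-7473-4f9d-a646-ab5140f8697f in this run); this is a sketch of the traced calls, not the test script itself:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                    # Malloc0
$rpc bdev_malloc_create 64 512                    # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # initial size 20 (LVOL_BDEV_INIT_SIZE)

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# while spdk_nvme_perf runs randwrite I/O against 10.0.0.2:4420, exercise the lvol:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                  # grow to LVOL_BDEV_FINAL_SIZE
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"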
00:08:00.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:00.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:00.118 Initialization complete. Launching workers. 00:08:00.118 ======================================================== 00:08:00.118 Latency(us) 00:08:00.118 Device Information : IOPS MiB/s Average min max 00:08:00.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12074.60 47.17 10603.97 2148.83 52187.46 00:08:00.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11914.20 46.54 10743.52 3633.53 55847.31 00:08:00.118 ======================================================== 00:08:00.118 Total : 23988.80 93.71 10673.28 2148.83 55847.31 00:08:00.118 00:08:00.118 19:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:00.118 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fcafffdf-7473-4f9d-a646-ab5140f8697f 00:08:00.381 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 06438f1f-30ab-41d5-b595-d5eb41acdc99 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:00.651 rmmod nvme_tcp 00:08:00.651 rmmod nvme_fabrics 00:08:00.651 rmmod nvme_keyring 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3590023 ']' 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3590023 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3590023 ']' 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3590023 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3590023 00:08:00.651 19:09:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3590023' 00:08:00.651 killing process with pid 3590023 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3590023 00:08:00.651 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3590023 00:08:00.951 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:00.951 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:00.951 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:00.951 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:00.951 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:00.951 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:00.951 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:00.951 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:00.951 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:00.951 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.951 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.951 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.910 19:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:02.910 00:08:02.910 real 0m21.920s 00:08:02.910 user 1m2.756s 00:08:02.910 sys 0m7.659s 00:08:02.910 19:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.910 19:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:02.910 ************************************ 00:08:02.910 END TEST nvmf_lvol 00:08:02.910 ************************************ 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:03.171 ************************************ 00:08:03.171 START TEST nvmf_lvs_grow 00:08:03.171 ************************************ 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:03.171 * Looking for test storage... 
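Editor's note: the nvmf_lvol teardown traced just above undoes the setup in reverse. A hedged sketch of the same cleanup follows; lvol, lvs and nvmfpid are the values tracked during setup, and the netns removal is an assumed equivalent of remove_spdk_ns rather than the harness's own helper:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"

kill "$nvmfpid" && wait "$nvmfpid"          # stop nvmf_tgt (pid 3590023 in this run)
modprobe -v -r nvme-tcp                     # unload the kernel NVMe/TCP stack
modprobe -v -r nvme-fabrics

# keep every iptables rule except the SPDK_NVMF-tagged ones added during setup
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip netns delete cvl_0_0_ns_spdk             # assumed equivalent of remove_spdk_ns
ip -4 addr flush cvl_0_1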
00:08:03.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:03.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.171 --rc genhtml_branch_coverage=1 00:08:03.171 --rc genhtml_function_coverage=1 00:08:03.171 --rc genhtml_legend=1 00:08:03.171 --rc geninfo_all_blocks=1 00:08:03.171 --rc geninfo_unexecuted_blocks=1 00:08:03.171 00:08:03.171 ' 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:03.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.171 --rc genhtml_branch_coverage=1 00:08:03.171 --rc genhtml_function_coverage=1 00:08:03.171 --rc genhtml_legend=1 00:08:03.171 --rc geninfo_all_blocks=1 00:08:03.171 --rc geninfo_unexecuted_blocks=1 00:08:03.171 00:08:03.171 ' 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:03.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.171 --rc genhtml_branch_coverage=1 00:08:03.171 --rc genhtml_function_coverage=1 00:08:03.171 --rc genhtml_legend=1 00:08:03.171 --rc geninfo_all_blocks=1 00:08:03.171 --rc geninfo_unexecuted_blocks=1 00:08:03.171 00:08:03.171 ' 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:03.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.171 --rc genhtml_branch_coverage=1 00:08:03.171 --rc genhtml_function_coverage=1 00:08:03.171 --rc genhtml_legend=1 00:08:03.171 --rc geninfo_all_blocks=1 00:08:03.171 --rc geninfo_unexecuted_blocks=1 00:08:03.171 00:08:03.171 ' 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:03.171 19:09:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.171 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:03.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:03.172 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:09.770 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:09.770 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.770 19:09:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:09.770 Found net devices under 0000:86:00.0: cvl_0_0 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.770 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:09.771 Found net devices under 0000:86:00.1: cvl_0_1 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:09.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:08:09.771 00:08:09.771 --- 10.0.0.2 ping statistics --- 00:08:09.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.771 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:09.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:08:09.771 00:08:09.771 --- 10.0.0.1 ping statistics --- 00:08:09.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.771 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3595898 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3595898 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3595898 ']' 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.771 [2024-11-26 19:09:32.362177] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
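The records above show nvmf_tcp_init preparing the data path: one ice port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 for the target, the other port (cvl_0_1) stays in the default namespace as the 10.0.0.1 initiator side, an iptables rule admits TCP port 4420, and a ping in each direction confirms connectivity before nvme-tcp is loaded. A minimal standalone sketch of that sequence follows; it only restates commands visible in the trace, needs root, and the interface names, namespace name and addresses are specific to this host and will differ elsewhere.

#!/usr/bin/env bash
# Sketch of the network setup nvmf_tcp_init performs in the trace above.
# cvl_0_0/cvl_0_1, the namespace name and the 10.0.0.x addresses are taken
# from this run; adjust them for other hosts. Must be run as root.
set -euo pipefail

TARGET_IF=cvl_0_0          # port handed to the SPDK target
INITIATOR_IF=cvl_0_1       # port left in the default namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic (port 4420) in from the initiator-facing port.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Connectivity checks, mirroring the two pings in the log.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

modprobe nvme-tcp

With the data path in place, nvmfappstart launches nvmf_tgt inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1), which is the startup whose banner continues in the next records.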
00:08:09.771 [2024-11-26 19:09:32.362215] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.771 [2024-11-26 19:09:32.440194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.771 [2024-11-26 19:09:32.479555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.771 [2024-11-26 19:09:32.479590] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.771 [2024-11-26 19:09:32.479597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.771 [2024-11-26 19:09:32.479604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.771 [2024-11-26 19:09:32.479608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.771 [2024-11-26 19:09:32.480166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:09.771 [2024-11-26 19:09:32.805859] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.771 ************************************ 00:08:09.771 START TEST lvs_grow_clean 00:08:09.771 ************************************ 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:09.771 19:09:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:09.771 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:10.028 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:10.028 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:10.285 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=61b69934-18ae-4720-a9f1-0c4ea3b8428f 00:08:10.285 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61b69934-18ae-4720-a9f1-0c4ea3b8428f 00:08:10.285 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:10.542 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:10.542 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:10.542 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 61b69934-18ae-4720-a9f1-0c4ea3b8428f lvol 150 00:08:10.799 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=73342300-b021-4f76-896e-c108c903aa4e 00:08:10.799 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:10.799 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:10.799 [2024-11-26 19:09:33.837628] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:10.799 [2024-11-26 19:09:33.837699] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:10.799 true 00:08:10.799 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
61b69934-18ae-4720-a9f1-0c4ea3b8428f 00:08:10.799 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:11.055 19:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:11.055 19:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:11.311 19:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 73342300-b021-4f76-896e-c108c903aa4e 00:08:11.569 19:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:11.569 [2024-11-26 19:09:34.591918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.569 19:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:11.828 19:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3596399 00:08:11.828 19:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:11.828 19:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:11.828 19:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3596399 /var/tmp/bdevperf.sock 00:08:11.828 19:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3596399 ']' 00:08:11.828 19:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:11.828 19:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.828 19:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:11.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:11.828 19:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.828 19:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:11.828 [2024-11-26 19:09:34.839765] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
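The lvs_grow_clean setup just traced reduces to a short RPC sequence: a 200 MiB file backs an AIO bdev, the bdev becomes an lvstore with 4 MiB clusters (49 data clusters), a 150 MiB lvol is created, the backing file is grown to 400 MiB and rescanned (the lvstore still reports 49 clusters until it is explicitly grown later), and the lvol is exported over NVMe/TCP. The sketch below restates those calls; the workspace paths match this run, the UUIDs returned will differ from the 61b69934-... / 73342300-... values captured here, and an nvmf_tgt already serving /var/tmp/spdk.sock is assumed.

#!/usr/bin/env bash
# Sketch of the lvs_grow() setup steps traced above (clean variant).
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
AIO_FILE="$SPDK/test/nvmf/target/aio_bdev"
NQN=nqn.2016-06.io.spdk:cnode0

"$RPC" nvmf_create_transport -t tcp -o -u 8192

# 200 MiB backing file -> AIO bdev -> lvstore with 4 MiB clusters.
rm -f "$AIO_FILE"
truncate -s 200M "$AIO_FILE"
"$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096
lvs=$("$RPC" bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)

# 200 MiB minus metadata leaves 49 usable 4 MiB clusters.
"$RPC" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

# 150 MiB lvol, then enlarge the backing file; the lvstore does not grow yet.
lvol=$("$RPC" bdev_lvol_create -u "$lvs" lvol 150)
truncate -s 400M "$AIO_FILE"
"$RPC" bdev_aio_rescan aio_bdev

# Export the lvol over NVMe/TCP on the target-side address.
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK0
"$RPC" nvmf_subsystem_add_ns "$NQN" "$lvol"
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The bdevperf process whose startup banner continues below then measures random-write performance against this namespace while the lvstore is grown.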
00:08:11.828 [2024-11-26 19:09:34.839812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3596399 ] 00:08:11.828 [2024-11-26 19:09:34.917420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.086 [2024-11-26 19:09:34.959857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.086 19:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.086 19:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:12.086 19:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:12.343 Nvme0n1 00:08:12.343 19:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:12.600 [ 00:08:12.600 { 00:08:12.600 "name": "Nvme0n1", 00:08:12.600 "aliases": [ 00:08:12.600 "73342300-b021-4f76-896e-c108c903aa4e" 00:08:12.600 ], 00:08:12.600 "product_name": "NVMe disk", 00:08:12.600 "block_size": 4096, 00:08:12.600 "num_blocks": 38912, 00:08:12.600 "uuid": "73342300-b021-4f76-896e-c108c903aa4e", 00:08:12.600 "numa_id": 1, 00:08:12.600 "assigned_rate_limits": { 00:08:12.600 "rw_ios_per_sec": 0, 00:08:12.600 "rw_mbytes_per_sec": 0, 00:08:12.600 "r_mbytes_per_sec": 0, 00:08:12.600 "w_mbytes_per_sec": 0 00:08:12.600 }, 00:08:12.600 "claimed": false, 00:08:12.600 "zoned": false, 00:08:12.600 "supported_io_types": { 00:08:12.600 "read": true, 00:08:12.600 "write": true, 00:08:12.600 "unmap": true, 00:08:12.600 "flush": true, 00:08:12.600 "reset": true, 00:08:12.600 "nvme_admin": true, 00:08:12.600 "nvme_io": true, 00:08:12.600 "nvme_io_md": false, 00:08:12.600 "write_zeroes": true, 00:08:12.600 "zcopy": false, 00:08:12.600 "get_zone_info": false, 00:08:12.600 "zone_management": false, 00:08:12.600 "zone_append": false, 00:08:12.600 "compare": true, 00:08:12.600 "compare_and_write": true, 00:08:12.600 "abort": true, 00:08:12.600 "seek_hole": false, 00:08:12.600 "seek_data": false, 00:08:12.600 "copy": true, 00:08:12.600 "nvme_iov_md": false 00:08:12.600 }, 00:08:12.601 "memory_domains": [ 00:08:12.601 { 00:08:12.601 "dma_device_id": "system", 00:08:12.601 "dma_device_type": 1 00:08:12.601 } 00:08:12.601 ], 00:08:12.601 "driver_specific": { 00:08:12.601 "nvme": [ 00:08:12.601 { 00:08:12.601 "trid": { 00:08:12.601 "trtype": "TCP", 00:08:12.601 "adrfam": "IPv4", 00:08:12.601 "traddr": "10.0.0.2", 00:08:12.601 "trsvcid": "4420", 00:08:12.601 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:12.601 }, 00:08:12.601 "ctrlr_data": { 00:08:12.601 "cntlid": 1, 00:08:12.601 "vendor_id": "0x8086", 00:08:12.601 "model_number": "SPDK bdev Controller", 00:08:12.601 "serial_number": "SPDK0", 00:08:12.601 "firmware_revision": "25.01", 00:08:12.601 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:12.601 "oacs": { 00:08:12.601 "security": 0, 00:08:12.601 "format": 0, 00:08:12.601 "firmware": 0, 00:08:12.601 "ns_manage": 0 00:08:12.601 }, 00:08:12.601 "multi_ctrlr": true, 00:08:12.601 
"ana_reporting": false 00:08:12.601 }, 00:08:12.601 "vs": { 00:08:12.601 "nvme_version": "1.3" 00:08:12.601 }, 00:08:12.601 "ns_data": { 00:08:12.601 "id": 1, 00:08:12.601 "can_share": true 00:08:12.601 } 00:08:12.601 } 00:08:12.601 ], 00:08:12.601 "mp_policy": "active_passive" 00:08:12.601 } 00:08:12.601 } 00:08:12.601 ] 00:08:12.601 19:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3596627 00:08:12.601 19:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:12.601 19:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:12.601 Running I/O for 10 seconds... 00:08:13.972 Latency(us) 00:08:13.972 [2024-11-26T18:09:37.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.972 Nvme0n1 : 1.00 22342.00 87.27 0.00 0.00 0.00 0.00 0.00 00:08:13.972 [2024-11-26T18:09:37.086Z] =================================================================================================================== 00:08:13.972 [2024-11-26T18:09:37.086Z] Total : 22342.00 87.27 0.00 0.00 0.00 0.00 0.00 00:08:13.972 00:08:14.539 19:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 61b69934-18ae-4720-a9f1-0c4ea3b8428f 00:08:14.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.797 Nvme0n1 : 2.00 22523.00 87.98 0.00 0.00 0.00 0.00 0.00 00:08:14.797 [2024-11-26T18:09:37.911Z] =================================================================================================================== 00:08:14.797 [2024-11-26T18:09:37.911Z] Total : 22523.00 87.98 0.00 0.00 0.00 0.00 0.00 00:08:14.797 00:08:14.797 true 00:08:14.797 19:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61b69934-18ae-4720-a9f1-0c4ea3b8428f 00:08:14.797 19:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:15.055 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:15.055 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:15.055 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3596627 00:08:15.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.619 Nvme0n1 : 3.00 22596.67 88.27 0.00 0.00 0.00 0.00 0.00 00:08:15.619 [2024-11-26T18:09:38.733Z] =================================================================================================================== 00:08:15.619 [2024-11-26T18:09:38.733Z] Total : 22596.67 88.27 0.00 0.00 0.00 0.00 0.00 00:08:15.619 00:08:16.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.993 Nvme0n1 : 4.00 22649.50 88.47 0.00 0.00 0.00 0.00 0.00 00:08:16.993 [2024-11-26T18:09:40.107Z] 
=================================================================================================================== 00:08:16.993 [2024-11-26T18:09:40.107Z] Total : 22649.50 88.47 0.00 0.00 0.00 0.00 0.00 00:08:16.993 00:08:17.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.928 Nvme0n1 : 5.00 22598.00 88.27 0.00 0.00 0.00 0.00 0.00 00:08:17.928 [2024-11-26T18:09:41.042Z] =================================================================================================================== 00:08:17.928 [2024-11-26T18:09:41.042Z] Total : 22598.00 88.27 0.00 0.00 0.00 0.00 0.00 00:08:17.928 00:08:18.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.866 Nvme0n1 : 6.00 22658.33 88.51 0.00 0.00 0.00 0.00 0.00 00:08:18.866 [2024-11-26T18:09:41.980Z] =================================================================================================================== 00:08:18.866 [2024-11-26T18:09:41.980Z] Total : 22658.33 88.51 0.00 0.00 0.00 0.00 0.00 00:08:18.866 00:08:19.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.802 Nvme0n1 : 7.00 22709.43 88.71 0.00 0.00 0.00 0.00 0.00 00:08:19.802 [2024-11-26T18:09:42.916Z] =================================================================================================================== 00:08:19.802 [2024-11-26T18:09:42.916Z] Total : 22709.43 88.71 0.00 0.00 0.00 0.00 0.00 00:08:19.802 00:08:20.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.736 Nvme0n1 : 8.00 22745.75 88.85 0.00 0.00 0.00 0.00 0.00 00:08:20.736 [2024-11-26T18:09:43.850Z] =================================================================================================================== 00:08:20.736 [2024-11-26T18:09:43.850Z] Total : 22745.75 88.85 0.00 0.00 0.00 0.00 0.00 00:08:20.736 00:08:21.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.672 Nvme0n1 : 9.00 22772.22 88.95 0.00 0.00 0.00 0.00 0.00 00:08:21.672 [2024-11-26T18:09:44.786Z] =================================================================================================================== 00:08:21.672 [2024-11-26T18:09:44.786Z] Total : 22772.22 88.95 0.00 0.00 0.00 0.00 0.00 00:08:21.672 00:08:23.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.048 Nvme0n1 : 10.00 22792.60 89.03 0.00 0.00 0.00 0.00 0.00 00:08:23.048 [2024-11-26T18:09:46.162Z] =================================================================================================================== 00:08:23.048 [2024-11-26T18:09:46.162Z] Total : 22792.60 89.03 0.00 0.00 0.00 0.00 0.00 00:08:23.048 00:08:23.048 00:08:23.048 Latency(us) 00:08:23.048 [2024-11-26T18:09:46.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.048 Nvme0n1 : 10.01 22792.72 89.03 0.00 0.00 5611.80 4306.65 14979.66 00:08:23.048 [2024-11-26T18:09:46.162Z] =================================================================================================================== 00:08:23.048 [2024-11-26T18:09:46.162Z] Total : 22792.72 89.03 0.00 0.00 5611.80 4306.65 14979.66 00:08:23.048 { 00:08:23.048 "results": [ 00:08:23.048 { 00:08:23.048 "job": "Nvme0n1", 00:08:23.048 "core_mask": "0x2", 00:08:23.048 "workload": "randwrite", 00:08:23.048 "status": "finished", 00:08:23.048 "queue_depth": 128, 00:08:23.048 "io_size": 4096, 00:08:23.048 
"runtime": 10.005213, 00:08:23.048 "iops": 22792.718156025265, 00:08:23.048 "mibps": 89.03405529697369, 00:08:23.048 "io_failed": 0, 00:08:23.048 "io_timeout": 0, 00:08:23.048 "avg_latency_us": 5611.803634404588, 00:08:23.048 "min_latency_us": 4306.651428571428, 00:08:23.048 "max_latency_us": 14979.657142857142 00:08:23.048 } 00:08:23.048 ], 00:08:23.048 "core_count": 1 00:08:23.048 } 00:08:23.048 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3596399 00:08:23.048 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3596399 ']' 00:08:23.048 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3596399 00:08:23.048 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:23.048 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.048 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3596399 00:08:23.048 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:23.048 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:23.048 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3596399' 00:08:23.048 killing process with pid 3596399 00:08:23.048 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3596399 00:08:23.048 Received shutdown signal, test time was about 10.000000 seconds 00:08:23.048 00:08:23.048 Latency(us) 00:08:23.048 [2024-11-26T18:09:46.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.048 [2024-11-26T18:09:46.162Z] =================================================================================================================== 00:08:23.048 [2024-11-26T18:09:46.162Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:23.048 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3596399 00:08:23.048 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:23.306 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:23.306 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61b69934-18ae-4720-a9f1-0c4ea3b8428f 00:08:23.306 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:23.564 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:23.564 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:23.564 19:09:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:23.823 [2024-11-26 19:09:46.757385] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:23.823 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61b69934-18ae-4720-a9f1-0c4ea3b8428f 00:08:23.823 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:23.823 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61b69934-18ae-4720-a9f1-0c4ea3b8428f 00:08:23.823 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:23.823 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.823 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:23.823 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.823 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:23.823 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.823 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:23.823 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:23.823 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61b69934-18ae-4720-a9f1-0c4ea3b8428f 00:08:24.082 request: 00:08:24.082 { 00:08:24.082 "uuid": "61b69934-18ae-4720-a9f1-0c4ea3b8428f", 00:08:24.082 "method": "bdev_lvol_get_lvstores", 00:08:24.082 "req_id": 1 00:08:24.082 } 00:08:24.082 Got JSON-RPC error response 00:08:24.082 response: 00:08:24.082 { 00:08:24.082 "code": -19, 00:08:24.082 "message": "No such device" 00:08:24.082 } 00:08:24.082 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:24.082 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:24.082 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:24.082 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:24.082 19:09:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:24.082 aio_bdev 00:08:24.082 19:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 73342300-b021-4f76-896e-c108c903aa4e 00:08:24.082 19:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=73342300-b021-4f76-896e-c108c903aa4e 00:08:24.082 19:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:24.082 19:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:24.082 19:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:24.082 19:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:24.082 19:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:24.341 19:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 73342300-b021-4f76-896e-c108c903aa4e -t 2000 00:08:24.600 [ 00:08:24.600 { 00:08:24.600 "name": "73342300-b021-4f76-896e-c108c903aa4e", 00:08:24.600 "aliases": [ 00:08:24.600 "lvs/lvol" 00:08:24.600 ], 00:08:24.600 "product_name": "Logical Volume", 00:08:24.600 "block_size": 4096, 00:08:24.600 "num_blocks": 38912, 00:08:24.600 "uuid": "73342300-b021-4f76-896e-c108c903aa4e", 00:08:24.600 "assigned_rate_limits": { 00:08:24.600 "rw_ios_per_sec": 0, 00:08:24.600 "rw_mbytes_per_sec": 0, 00:08:24.600 "r_mbytes_per_sec": 0, 00:08:24.600 "w_mbytes_per_sec": 0 00:08:24.600 }, 00:08:24.600 "claimed": false, 00:08:24.600 "zoned": false, 00:08:24.600 "supported_io_types": { 00:08:24.600 "read": true, 00:08:24.600 "write": true, 00:08:24.600 "unmap": true, 00:08:24.600 "flush": false, 00:08:24.600 "reset": true, 00:08:24.600 "nvme_admin": false, 00:08:24.600 "nvme_io": false, 00:08:24.600 "nvme_io_md": false, 00:08:24.600 "write_zeroes": true, 00:08:24.600 "zcopy": false, 00:08:24.600 "get_zone_info": false, 00:08:24.600 "zone_management": false, 00:08:24.600 "zone_append": false, 00:08:24.600 "compare": false, 00:08:24.600 "compare_and_write": false, 00:08:24.600 "abort": false, 00:08:24.600 "seek_hole": true, 00:08:24.600 "seek_data": true, 00:08:24.600 "copy": false, 00:08:24.600 "nvme_iov_md": false 00:08:24.600 }, 00:08:24.600 "driver_specific": { 00:08:24.600 "lvol": { 00:08:24.600 "lvol_store_uuid": "61b69934-18ae-4720-a9f1-0c4ea3b8428f", 00:08:24.600 "base_bdev": "aio_bdev", 00:08:24.600 "thin_provision": false, 00:08:24.600 "num_allocated_clusters": 38, 00:08:24.600 "snapshot": false, 00:08:24.600 "clone": false, 00:08:24.600 "esnap_clone": false 00:08:24.600 } 00:08:24.600 } 00:08:24.600 } 00:08:24.600 ] 00:08:24.600 19:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:24.600 19:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61b69934-18ae-4720-a9f1-0c4ea3b8428f 00:08:24.600 
19:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:24.859 19:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:24.859 19:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61b69934-18ae-4720-a9f1-0c4ea3b8428f 00:08:24.859 19:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:24.859 19:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:24.859 19:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 73342300-b021-4f76-896e-c108c903aa4e 00:08:25.118 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 61b69934-18ae-4720-a9f1-0c4ea3b8428f 00:08:25.377 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:25.377 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.377 00:08:25.377 real 0m15.618s 00:08:25.377 user 0m15.134s 00:08:25.377 sys 0m1.560s 00:08:25.377 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.377 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:25.377 ************************************ 00:08:25.377 END TEST lvs_grow_clean 00:08:25.377 ************************************ 00:08:25.635 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:25.635 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:25.635 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.635 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.635 ************************************ 00:08:25.635 START TEST lvs_grow_dirty 00:08:25.635 ************************************ 00:08:25.635 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:25.635 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:25.635 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:25.635 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:25.635 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:25.635 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:25.635 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:25.635 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.635 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.635 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:25.894 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:25.894 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:25.894 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5ec44c8d-24ec-46fa-8012-b747c5ec479b 00:08:25.894 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ec44c8d-24ec-46fa-8012-b747c5ec479b 00:08:25.894 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:26.153 19:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:26.153 19:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:26.153 19:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5ec44c8d-24ec-46fa-8012-b747c5ec479b lvol 150 00:08:26.412 19:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=81870bff-4255-497d-b012-f10c03ce8656 00:08:26.412 19:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:26.412 19:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:26.412 [2024-11-26 19:09:49.497556] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:26.412 [2024-11-26 19:09:49.497609] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:26.412 true 00:08:26.412 19:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ec44c8d-24ec-46fa-8012-b747c5ec479b 00:08:26.412 19:09:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:26.671 19:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:26.671 19:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:26.929 19:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 81870bff-4255-497d-b012-f10c03ce8656 00:08:27.187 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:27.187 [2024-11-26 19:09:50.215736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.187 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:27.445 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3599007 00:08:27.445 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:27.445 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:27.445 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3599007 /var/tmp/bdevperf.sock 00:08:27.445 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3599007 ']' 00:08:27.445 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:27.445 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.445 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:27.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:27.446 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.446 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:27.446 [2024-11-26 19:09:50.483783] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
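From here the dirty variant repeats the I/O-plus-grow phase already traced for lvs_grow_clean: a bdevperf instance on core 1 waits for an RPC, the exported lvol is attached as Nvme0n1 over NVMe/TCP, ten seconds of 4 KiB random writes run while the lvstore is grown two seconds in, and total_data_clusters is expected to move from 49 to 99. A compact sketch of that phase, with $lvs as a placeholder for the lvstore UUID from setup (5ec44c8d-... in this run):

#!/usr/bin/env bash
# Sketch of the I/O-plus-grow phase that both lvs_grow variants run.
# Flags are copied from the bdevperf invocation in the trace; $lvs must be
# set by the caller to the lvstore UUID returned during setup. Assumes the
# subsystem from the setup sketch is listening on 10.0.0.2:4420.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
BPERF_SOCK=/var/tmp/bdevperf.sock
: "${lvs:?set lvs to the lvstore UUID}"

# 10 s of 4 KiB random writes at queue depth 128 on core 1 (-m 0x2);
# -z makes bdevperf wait for perform_tests before issuing I/O.
"$SPDK/build/examples/bdevperf" -r "$BPERF_SOCK" -m 0x2 -o 4096 -q 128 \
    -w randwrite -t 10 -S 1 -z &
bdevperf_pid=$!
sleep 1   # crude stand-in for the test's waitforlisten on $BPERF_SOCK

# Attach the exported lvol as Nvme0n1 and start the workload.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests &
run_test_pid=$!

# Two seconds into the run, grow the lvstore into the 400 MiB file and
# check that total_data_clusters went from 49 to 99 while I/O continues.
sleep 2
"$RPC" bdev_lvol_grow_lvstore -u "$lvs"
"$RPC" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

wait "$run_test_pid"
kill "$bdevperf_pid"

The per-second Latency tables and the JSON results block that follow in the log are bdevperf's standard output for such a run.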
00:08:27.446 [2024-11-26 19:09:50.483833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3599007 ] 00:08:27.703 [2024-11-26 19:09:50.559897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.703 [2024-11-26 19:09:50.600231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.269 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.269 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:28.269 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:28.834 Nvme0n1 00:08:28.834 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:28.834 [ 00:08:28.834 { 00:08:28.834 "name": "Nvme0n1", 00:08:28.834 "aliases": [ 00:08:28.834 "81870bff-4255-497d-b012-f10c03ce8656" 00:08:28.834 ], 00:08:28.834 "product_name": "NVMe disk", 00:08:28.834 "block_size": 4096, 00:08:28.834 "num_blocks": 38912, 00:08:28.834 "uuid": "81870bff-4255-497d-b012-f10c03ce8656", 00:08:28.834 "numa_id": 1, 00:08:28.834 "assigned_rate_limits": { 00:08:28.834 "rw_ios_per_sec": 0, 00:08:28.834 "rw_mbytes_per_sec": 0, 00:08:28.834 "r_mbytes_per_sec": 0, 00:08:28.834 "w_mbytes_per_sec": 0 00:08:28.834 }, 00:08:28.834 "claimed": false, 00:08:28.834 "zoned": false, 00:08:28.834 "supported_io_types": { 00:08:28.834 "read": true, 00:08:28.834 "write": true, 00:08:28.834 "unmap": true, 00:08:28.834 "flush": true, 00:08:28.834 "reset": true, 00:08:28.834 "nvme_admin": true, 00:08:28.834 "nvme_io": true, 00:08:28.834 "nvme_io_md": false, 00:08:28.834 "write_zeroes": true, 00:08:28.834 "zcopy": false, 00:08:28.834 "get_zone_info": false, 00:08:28.834 "zone_management": false, 00:08:28.834 "zone_append": false, 00:08:28.834 "compare": true, 00:08:28.834 "compare_and_write": true, 00:08:28.834 "abort": true, 00:08:28.834 "seek_hole": false, 00:08:28.834 "seek_data": false, 00:08:28.834 "copy": true, 00:08:28.834 "nvme_iov_md": false 00:08:28.834 }, 00:08:28.834 "memory_domains": [ 00:08:28.834 { 00:08:28.834 "dma_device_id": "system", 00:08:28.834 "dma_device_type": 1 00:08:28.834 } 00:08:28.834 ], 00:08:28.834 "driver_specific": { 00:08:28.834 "nvme": [ 00:08:28.834 { 00:08:28.834 "trid": { 00:08:28.834 "trtype": "TCP", 00:08:28.834 "adrfam": "IPv4", 00:08:28.834 "traddr": "10.0.0.2", 00:08:28.834 "trsvcid": "4420", 00:08:28.834 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:28.834 }, 00:08:28.834 "ctrlr_data": { 00:08:28.834 "cntlid": 1, 00:08:28.834 "vendor_id": "0x8086", 00:08:28.834 "model_number": "SPDK bdev Controller", 00:08:28.834 "serial_number": "SPDK0", 00:08:28.834 "firmware_revision": "25.01", 00:08:28.834 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:28.834 "oacs": { 00:08:28.834 "security": 0, 00:08:28.834 "format": 0, 00:08:28.834 "firmware": 0, 00:08:28.834 "ns_manage": 0 00:08:28.834 }, 00:08:28.834 "multi_ctrlr": true, 00:08:28.834 
"ana_reporting": false 00:08:28.834 }, 00:08:28.834 "vs": { 00:08:28.834 "nvme_version": "1.3" 00:08:28.834 }, 00:08:28.834 "ns_data": { 00:08:28.834 "id": 1, 00:08:28.834 "can_share": true 00:08:28.834 } 00:08:28.834 } 00:08:28.834 ], 00:08:28.834 "mp_policy": "active_passive" 00:08:28.834 } 00:08:28.834 } 00:08:28.835 ] 00:08:28.835 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:28.835 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3599250 00:08:28.835 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:29.092 Running I/O for 10 seconds... 00:08:30.024 Latency(us) 00:08:30.024 [2024-11-26T18:09:53.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.024 Nvme0n1 : 1.00 23245.00 90.80 0.00 0.00 0.00 0.00 0.00 00:08:30.024 [2024-11-26T18:09:53.138Z] =================================================================================================================== 00:08:30.024 [2024-11-26T18:09:53.138Z] Total : 23245.00 90.80 0.00 0.00 0.00 0.00 0.00 00:08:30.024 00:08:30.956 19:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5ec44c8d-24ec-46fa-8012-b747c5ec479b 00:08:30.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.956 Nvme0n1 : 2.00 23405.00 91.43 0.00 0.00 0.00 0.00 0.00 00:08:30.956 [2024-11-26T18:09:54.070Z] =================================================================================================================== 00:08:30.956 [2024-11-26T18:09:54.070Z] Total : 23405.00 91.43 0.00 0.00 0.00 0.00 0.00 00:08:30.956 00:08:31.214 true 00:08:31.214 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ec44c8d-24ec-46fa-8012-b747c5ec479b 00:08:31.214 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:31.472 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:31.472 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:31.472 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3599250 00:08:32.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.037 Nvme0n1 : 3.00 23449.33 91.60 0.00 0.00 0.00 0.00 0.00 00:08:32.037 [2024-11-26T18:09:55.151Z] =================================================================================================================== 00:08:32.037 [2024-11-26T18:09:55.151Z] Total : 23449.33 91.60 0.00 0.00 0.00 0.00 0.00 00:08:32.037 00:08:32.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.971 Nvme0n1 : 4.00 23527.50 91.90 0.00 0.00 0.00 0.00 0.00 00:08:32.971 [2024-11-26T18:09:56.085Z] 
=================================================================================================================== 00:08:32.971 [2024-11-26T18:09:56.085Z] Total : 23527.50 91.90 0.00 0.00 0.00 0.00 0.00 00:08:32.971 00:08:33.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.905 Nvme0n1 : 5.00 23562.80 92.04 0.00 0.00 0.00 0.00 0.00 00:08:33.905 [2024-11-26T18:09:57.019Z] =================================================================================================================== 00:08:33.905 [2024-11-26T18:09:57.019Z] Total : 23562.80 92.04 0.00 0.00 0.00 0.00 0.00 00:08:33.905 00:08:35.278 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.278 Nvme0n1 : 6.00 23608.17 92.22 0.00 0.00 0.00 0.00 0.00 00:08:35.278 [2024-11-26T18:09:58.392Z] =================================================================================================================== 00:08:35.278 [2024-11-26T18:09:58.392Z] Total : 23608.17 92.22 0.00 0.00 0.00 0.00 0.00 00:08:35.278 00:08:36.209 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.209 Nvme0n1 : 7.00 23639.86 92.34 0.00 0.00 0.00 0.00 0.00 00:08:36.209 [2024-11-26T18:09:59.323Z] =================================================================================================================== 00:08:36.209 [2024-11-26T18:09:59.323Z] Total : 23639.86 92.34 0.00 0.00 0.00 0.00 0.00 00:08:36.209 00:08:37.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.142 Nvme0n1 : 8.00 23658.38 92.42 0.00 0.00 0.00 0.00 0.00 00:08:37.142 [2024-11-26T18:10:00.256Z] =================================================================================================================== 00:08:37.142 [2024-11-26T18:10:00.256Z] Total : 23658.38 92.42 0.00 0.00 0.00 0.00 0.00 00:08:37.142 00:08:38.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.075 Nvme0n1 : 9.00 23635.22 92.33 0.00 0.00 0.00 0.00 0.00 00:08:38.075 [2024-11-26T18:10:01.189Z] =================================================================================================================== 00:08:38.075 [2024-11-26T18:10:01.189Z] Total : 23635.22 92.33 0.00 0.00 0.00 0.00 0.00 00:08:38.075 00:08:39.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.007 Nvme0n1 : 10.00 23659.60 92.42 0.00 0.00 0.00 0.00 0.00 00:08:39.007 [2024-11-26T18:10:02.121Z] =================================================================================================================== 00:08:39.007 [2024-11-26T18:10:02.121Z] Total : 23659.60 92.42 0.00 0.00 0.00 0.00 0.00 00:08:39.007 00:08:39.007 00:08:39.007 Latency(us) 00:08:39.007 [2024-11-26T18:10:02.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.007 Nvme0n1 : 10.01 23658.60 92.42 0.00 0.00 5406.64 3105.16 14979.66 00:08:39.007 [2024-11-26T18:10:02.121Z] =================================================================================================================== 00:08:39.007 [2024-11-26T18:10:02.121Z] Total : 23658.60 92.42 0.00 0.00 5406.64 3105.16 14979.66 00:08:39.007 { 00:08:39.007 "results": [ 00:08:39.007 { 00:08:39.007 "job": "Nvme0n1", 00:08:39.007 "core_mask": "0x2", 00:08:39.007 "workload": "randwrite", 00:08:39.007 "status": "finished", 00:08:39.007 "queue_depth": 128, 00:08:39.007 "io_size": 4096, 00:08:39.007 
"runtime": 10.005835, 00:08:39.007 "iops": 23658.595209695144, 00:08:39.007 "mibps": 92.41638753787166, 00:08:39.007 "io_failed": 0, 00:08:39.007 "io_timeout": 0, 00:08:39.007 "avg_latency_us": 5406.642713934089, 00:08:39.007 "min_latency_us": 3105.158095238095, 00:08:39.007 "max_latency_us": 14979.657142857142 00:08:39.007 } 00:08:39.007 ], 00:08:39.007 "core_count": 1 00:08:39.007 } 00:08:39.007 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3599007 00:08:39.007 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3599007 ']' 00:08:39.007 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3599007 00:08:39.007 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:39.007 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.007 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3599007 00:08:39.007 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:39.007 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:39.007 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3599007' 00:08:39.007 killing process with pid 3599007 00:08:39.007 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3599007 00:08:39.007 Received shutdown signal, test time was about 10.000000 seconds 00:08:39.007 00:08:39.007 Latency(us) 00:08:39.007 [2024-11-26T18:10:02.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.007 [2024-11-26T18:10:02.121Z] =================================================================================================================== 00:08:39.007 [2024-11-26T18:10:02.121Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:39.007 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3599007 00:08:39.266 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:39.525 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:39.784 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ec44c8d-24ec-46fa-8012-b747c5ec479b 00:08:39.784 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:39.784 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:39.784 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:39.784 19:10:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3595898 00:08:39.784 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3595898 00:08:39.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3595898 Killed "${NVMF_APP[@]}" "$@" 00:08:39.784 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:39.784 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:39.784 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:39.784 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.784 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:39.784 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3601090 00:08:39.784 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3601090 00:08:39.784 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:39.784 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3601090 ']' 00:08:39.784 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.784 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.784 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.784 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.784 19:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:40.043 [2024-11-26 19:10:02.935080] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:08:40.043 [2024-11-26 19:10:02.935124] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.043 [2024-11-26 19:10:03.011305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.043 [2024-11-26 19:10:03.049077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.043 [2024-11-26 19:10:03.049109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.043 [2024-11-26 19:10:03.049118] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.043 [2024-11-26 19:10:03.049123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
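
For reference, the 10-second summary above is internally consistent: 23658.60 IOPS at 4096-byte I/O is 23658.60 x 4096 / 2^20, about 92.42 MiB/s, which is the value reported in the MiB/s column. What happens around this point is the "dirty" part of the test: the first nvmf_tgt (pid 3595898) is killed with SIGKILL so the lvstore never gets a clean shutdown, a fresh nvmf_tgt is started, and immediately below the AIO bdev is re-created on the same backing file, which forces blobstore recovery. A condensed sketch of that sequence, with $SPDK_DIR and $TESTDIR standing in for the workspace paths shown in the trace (placeholders, not variables used by the test):

  kill -9 3595898                                                        # unclean shutdown, lvstore left dirty
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
  # re-attaching the same file is what triggers "Performing recovery on blobstore" just below
  "$SPDK_DIR/scripts/rpc.py" bdev_aio_create "$TESTDIR/aio_bdev" aio_bdev 4096

The free_clusters == 61 and total_data_clusters == 99 checks that follow verify that the earlier bdev_lvol_grow_lvstore survived the unclean shutdown and recovery.
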
00:08:40.043 [2024-11-26 19:10:03.049128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.043 [2024-11-26 19:10:03.049658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.043 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.043 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:40.043 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:40.043 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:40.043 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:40.301 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.301 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.301 [2024-11-26 19:10:03.372695] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:40.301 [2024-11-26 19:10:03.372798] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:40.301 [2024-11-26 19:10:03.372824] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:40.301 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:40.301 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 81870bff-4255-497d-b012-f10c03ce8656 00:08:40.301 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=81870bff-4255-497d-b012-f10c03ce8656 00:08:40.301 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.301 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:40.301 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.301 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.301 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:40.559 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 81870bff-4255-497d-b012-f10c03ce8656 -t 2000 00:08:40.817 [ 00:08:40.817 { 00:08:40.817 "name": "81870bff-4255-497d-b012-f10c03ce8656", 00:08:40.817 "aliases": [ 00:08:40.817 "lvs/lvol" 00:08:40.817 ], 00:08:40.817 "product_name": "Logical Volume", 00:08:40.817 "block_size": 4096, 00:08:40.817 "num_blocks": 38912, 00:08:40.817 "uuid": "81870bff-4255-497d-b012-f10c03ce8656", 00:08:40.817 "assigned_rate_limits": { 00:08:40.817 "rw_ios_per_sec": 0, 00:08:40.817 "rw_mbytes_per_sec": 0, 
00:08:40.817 "r_mbytes_per_sec": 0, 00:08:40.817 "w_mbytes_per_sec": 0 00:08:40.817 }, 00:08:40.817 "claimed": false, 00:08:40.817 "zoned": false, 00:08:40.817 "supported_io_types": { 00:08:40.817 "read": true, 00:08:40.817 "write": true, 00:08:40.817 "unmap": true, 00:08:40.817 "flush": false, 00:08:40.817 "reset": true, 00:08:40.817 "nvme_admin": false, 00:08:40.817 "nvme_io": false, 00:08:40.817 "nvme_io_md": false, 00:08:40.817 "write_zeroes": true, 00:08:40.817 "zcopy": false, 00:08:40.817 "get_zone_info": false, 00:08:40.817 "zone_management": false, 00:08:40.817 "zone_append": false, 00:08:40.817 "compare": false, 00:08:40.817 "compare_and_write": false, 00:08:40.817 "abort": false, 00:08:40.817 "seek_hole": true, 00:08:40.817 "seek_data": true, 00:08:40.817 "copy": false, 00:08:40.817 "nvme_iov_md": false 00:08:40.817 }, 00:08:40.817 "driver_specific": { 00:08:40.817 "lvol": { 00:08:40.817 "lvol_store_uuid": "5ec44c8d-24ec-46fa-8012-b747c5ec479b", 00:08:40.817 "base_bdev": "aio_bdev", 00:08:40.817 "thin_provision": false, 00:08:40.817 "num_allocated_clusters": 38, 00:08:40.817 "snapshot": false, 00:08:40.817 "clone": false, 00:08:40.817 "esnap_clone": false 00:08:40.817 } 00:08:40.817 } 00:08:40.817 } 00:08:40.817 ] 00:08:40.818 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:40.818 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ec44c8d-24ec-46fa-8012-b747c5ec479b 00:08:40.818 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:41.076 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:41.076 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ec44c8d-24ec-46fa-8012-b747c5ec479b 00:08:41.076 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:41.076 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:41.076 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:41.334 [2024-11-26 19:10:04.301411] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:41.334 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ec44c8d-24ec-46fa-8012-b747c5ec479b 00:08:41.334 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:41.334 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ec44c8d-24ec-46fa-8012-b747c5ec479b 00:08:41.334 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.334 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.334 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.334 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.334 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.334 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.334 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.334 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:41.334 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ec44c8d-24ec-46fa-8012-b747c5ec479b 00:08:41.592 request: 00:08:41.592 { 00:08:41.592 "uuid": "5ec44c8d-24ec-46fa-8012-b747c5ec479b", 00:08:41.592 "method": "bdev_lvol_get_lvstores", 00:08:41.592 "req_id": 1 00:08:41.592 } 00:08:41.592 Got JSON-RPC error response 00:08:41.592 response: 00:08:41.592 { 00:08:41.592 "code": -19, 00:08:41.592 "message": "No such device" 00:08:41.592 } 00:08:41.592 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:41.592 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.592 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:41.592 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.592 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:41.592 aio_bdev 00:08:41.592 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 81870bff-4255-497d-b012-f10c03ce8656 00:08:41.592 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=81870bff-4255-497d-b012-f10c03ce8656 00:08:41.592 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.592 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:41.592 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.592 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.593 19:10:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:41.850 19:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 81870bff-4255-497d-b012-f10c03ce8656 -t 2000 00:08:42.107 [ 00:08:42.107 { 00:08:42.107 "name": "81870bff-4255-497d-b012-f10c03ce8656", 00:08:42.107 "aliases": [ 00:08:42.107 "lvs/lvol" 00:08:42.107 ], 00:08:42.107 "product_name": "Logical Volume", 00:08:42.107 "block_size": 4096, 00:08:42.107 "num_blocks": 38912, 00:08:42.107 "uuid": "81870bff-4255-497d-b012-f10c03ce8656", 00:08:42.107 "assigned_rate_limits": { 00:08:42.107 "rw_ios_per_sec": 0, 00:08:42.107 "rw_mbytes_per_sec": 0, 00:08:42.107 "r_mbytes_per_sec": 0, 00:08:42.107 "w_mbytes_per_sec": 0 00:08:42.107 }, 00:08:42.107 "claimed": false, 00:08:42.107 "zoned": false, 00:08:42.107 "supported_io_types": { 00:08:42.107 "read": true, 00:08:42.107 "write": true, 00:08:42.107 "unmap": true, 00:08:42.107 "flush": false, 00:08:42.107 "reset": true, 00:08:42.107 "nvme_admin": false, 00:08:42.107 "nvme_io": false, 00:08:42.107 "nvme_io_md": false, 00:08:42.107 "write_zeroes": true, 00:08:42.107 "zcopy": false, 00:08:42.107 "get_zone_info": false, 00:08:42.107 "zone_management": false, 00:08:42.107 "zone_append": false, 00:08:42.107 "compare": false, 00:08:42.107 "compare_and_write": false, 00:08:42.107 "abort": false, 00:08:42.107 "seek_hole": true, 00:08:42.107 "seek_data": true, 00:08:42.107 "copy": false, 00:08:42.107 "nvme_iov_md": false 00:08:42.107 }, 00:08:42.107 "driver_specific": { 00:08:42.107 "lvol": { 00:08:42.107 "lvol_store_uuid": "5ec44c8d-24ec-46fa-8012-b747c5ec479b", 00:08:42.107 "base_bdev": "aio_bdev", 00:08:42.107 "thin_provision": false, 00:08:42.107 "num_allocated_clusters": 38, 00:08:42.107 "snapshot": false, 00:08:42.107 "clone": false, 00:08:42.107 "esnap_clone": false 00:08:42.108 } 00:08:42.108 } 00:08:42.108 } 00:08:42.108 ] 00:08:42.108 19:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:42.108 19:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ec44c8d-24ec-46fa-8012-b747c5ec479b 00:08:42.108 19:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:42.365 19:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:42.365 19:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ec44c8d-24ec-46fa-8012-b747c5ec479b 00:08:42.365 19:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:42.365 19:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:42.365 19:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 81870bff-4255-497d-b012-f10c03ce8656 00:08:42.622 19:10:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5ec44c8d-24ec-46fa-8012-b747c5ec479b 00:08:42.880 19:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:42.880 19:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:43.138 00:08:43.138 real 0m17.448s 00:08:43.138 user 0m44.910s 00:08:43.138 sys 0m3.975s 00:08:43.138 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.138 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:43.138 ************************************ 00:08:43.138 END TEST lvs_grow_dirty 00:08:43.138 ************************************ 00:08:43.138 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:43.138 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:43.138 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:43.138 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:43.138 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:43.138 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:43.138 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:43.138 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:43.138 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:43.138 nvmf_trace.0 00:08:43.138 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:43.138 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:43.138 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:43.138 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:43.138 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:43.138 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:43.139 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:43.139 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:43.139 rmmod nvme_tcp 00:08:43.139 rmmod nvme_fabrics 00:08:43.139 rmmod nvme_keyring 00:08:43.139 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:43.139 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:43.139 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:43.139 
19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3601090 ']' 00:08:43.139 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3601090 00:08:43.139 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3601090 ']' 00:08:43.139 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3601090 00:08:43.139 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:43.139 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.139 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3601090 00:08:43.139 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:43.139 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:43.139 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3601090' 00:08:43.139 killing process with pid 3601090 00:08:43.139 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3601090 00:08:43.139 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3601090 00:08:43.398 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:43.398 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:43.398 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:43.398 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:43.398 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:43.398 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:43.398 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:43.398 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:43.398 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:43.398 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.398 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.398 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:45.935 00:08:45.935 real 0m42.390s 00:08:45.935 user 1m5.565s 00:08:45.935 sys 0m10.547s 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:45.935 ************************************ 00:08:45.935 END TEST nvmf_lvs_grow 00:08:45.935 ************************************ 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.935 ************************************ 00:08:45.935 START TEST nvmf_bdev_io_wait 00:08:45.935 ************************************ 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:45.935 * Looking for test storage... 00:08:45.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:45.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.935 --rc genhtml_branch_coverage=1 00:08:45.935 --rc genhtml_function_coverage=1 00:08:45.935 --rc genhtml_legend=1 00:08:45.935 --rc geninfo_all_blocks=1 00:08:45.935 --rc geninfo_unexecuted_blocks=1 00:08:45.935 00:08:45.935 ' 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:45.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.935 --rc genhtml_branch_coverage=1 00:08:45.935 --rc genhtml_function_coverage=1 00:08:45.935 --rc genhtml_legend=1 00:08:45.935 --rc geninfo_all_blocks=1 00:08:45.935 --rc geninfo_unexecuted_blocks=1 00:08:45.935 00:08:45.935 ' 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:45.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.935 --rc genhtml_branch_coverage=1 00:08:45.935 --rc genhtml_function_coverage=1 00:08:45.935 --rc genhtml_legend=1 00:08:45.935 --rc geninfo_all_blocks=1 00:08:45.935 --rc geninfo_unexecuted_blocks=1 00:08:45.935 00:08:45.935 ' 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:45.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.935 --rc genhtml_branch_coverage=1 00:08:45.935 --rc genhtml_function_coverage=1 00:08:45.935 --rc genhtml_legend=1 00:08:45.935 --rc geninfo_all_blocks=1 00:08:45.935 --rc geninfo_unexecuted_blocks=1 00:08:45.935 00:08:45.935 ' 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.935 19:10:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.935 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:45.936 19:10:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:52.506 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:52.506 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.506 19:10:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:52.506 Found net devices under 0000:86:00.0: cvl_0_0 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.506 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:52.507 Found net devices under 0000:86:00.1: cvl_0_1 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:52.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:08:52.507 00:08:52.507 --- 10.0.0.2 ping statistics --- 00:08:52.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.507 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:52.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:08:52.507 00:08:52.507 --- 10.0.0.1 ping statistics --- 00:08:52.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.507 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3605373 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3605373 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3605373 ']' 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.507 [2024-11-26 19:10:14.801877] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
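At this point the trace has brought up the test topology: the target-side port (cvl_0_0, 10.0.0.2) is moved into a private network namespace, the initiator-side port (cvl_0_1, 10.0.0.1) stays in the root namespace, and the nvmf target is started inside the namespace in a paused state. A condensed sketch of those steps, using the interface names, addresses and paths specific to this run (the real logic lives in nvmf/common.sh):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                                   # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> initiator
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &                           # paused until RPC configuration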
00:08:52.507 [2024-11-26 19:10:14.801931] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.507 [2024-11-26 19:10:14.880205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.507 [2024-11-26 19:10:14.923659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.507 [2024-11-26 19:10:14.923697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.507 [2024-11-26 19:10:14.923705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.507 [2024-11-26 19:10:14.923710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.507 [2024-11-26 19:10:14.923715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.507 [2024-11-26 19:10:14.925254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.507 [2024-11-26 19:10:14.925365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.507 [2024-11-26 19:10:14.925489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.507 [2024-11-26 19:10:14.925491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.507 19:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.507 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.507 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:52.507 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.507 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:52.507 [2024-11-26 19:10:15.056728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.507 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.507 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:52.507 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.507 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.507 Malloc0 00:08:52.507 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.507 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:52.507 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.507 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.507 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.507 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:52.507 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.508 [2024-11-26 19:10:15.111724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3605405 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3605407 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.508 { 00:08:52.508 "params": { 
00:08:52.508 "name": "Nvme$subsystem", 00:08:52.508 "trtype": "$TEST_TRANSPORT", 00:08:52.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.508 "adrfam": "ipv4", 00:08:52.508 "trsvcid": "$NVMF_PORT", 00:08:52.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.508 "hdgst": ${hdgst:-false}, 00:08:52.508 "ddgst": ${ddgst:-false} 00:08:52.508 }, 00:08:52.508 "method": "bdev_nvme_attach_controller" 00:08:52.508 } 00:08:52.508 EOF 00:08:52.508 )") 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3605409 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.508 { 00:08:52.508 "params": { 00:08:52.508 "name": "Nvme$subsystem", 00:08:52.508 "trtype": "$TEST_TRANSPORT", 00:08:52.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.508 "adrfam": "ipv4", 00:08:52.508 "trsvcid": "$NVMF_PORT", 00:08:52.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.508 "hdgst": ${hdgst:-false}, 00:08:52.508 "ddgst": ${ddgst:-false} 00:08:52.508 }, 00:08:52.508 "method": "bdev_nvme_attach_controller" 00:08:52.508 } 00:08:52.508 EOF 00:08:52.508 )") 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3605412 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.508 { 00:08:52.508 "params": { 
00:08:52.508 "name": "Nvme$subsystem", 00:08:52.508 "trtype": "$TEST_TRANSPORT", 00:08:52.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.508 "adrfam": "ipv4", 00:08:52.508 "trsvcid": "$NVMF_PORT", 00:08:52.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.508 "hdgst": ${hdgst:-false}, 00:08:52.508 "ddgst": ${ddgst:-false} 00:08:52.508 }, 00:08:52.508 "method": "bdev_nvme_attach_controller" 00:08:52.508 } 00:08:52.508 EOF 00:08:52.508 )") 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.508 { 00:08:52.508 "params": { 00:08:52.508 "name": "Nvme$subsystem", 00:08:52.508 "trtype": "$TEST_TRANSPORT", 00:08:52.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.508 "adrfam": "ipv4", 00:08:52.508 "trsvcid": "$NVMF_PORT", 00:08:52.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.508 "hdgst": ${hdgst:-false}, 00:08:52.508 "ddgst": ${ddgst:-false} 00:08:52.508 }, 00:08:52.508 "method": "bdev_nvme_attach_controller" 00:08:52.508 } 00:08:52.508 EOF 00:08:52.508 )") 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3605405 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.508 "params": { 00:08:52.508 "name": "Nvme1", 00:08:52.508 "trtype": "tcp", 00:08:52.508 "traddr": "10.0.0.2", 00:08:52.508 "adrfam": "ipv4", 00:08:52.508 "trsvcid": "4420", 00:08:52.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.508 "hdgst": false, 00:08:52.508 "ddgst": false 00:08:52.508 }, 00:08:52.508 "method": "bdev_nvme_attach_controller" 00:08:52.508 }' 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
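The target configuration and the four bdevperf clients traced above condense to the sketch below. rpc_cmd is the suite's RPC helper (ultimately scripts/rpc.py against the default /var/tmp/spdk.sock), the /dev/fd/63 on the bdevperf command lines is the process substitution that feeds each client the JSON emitted by gen_nvmf_target_json, and the PID variable names are the ones bdev_io_wait.sh uses:

  rpc_cmd bdev_set_options -p 5 -c 1
  rpc_cmd framework_start_init                                         # releases the --wait-for-rpc pause
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  "$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  "$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
  "$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  "$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  wait "$WRITE_PID"                                                    # the other three are waited on below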
00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.508 "params": { 00:08:52.508 "name": "Nvme1", 00:08:52.508 "trtype": "tcp", 00:08:52.508 "traddr": "10.0.0.2", 00:08:52.508 "adrfam": "ipv4", 00:08:52.508 "trsvcid": "4420", 00:08:52.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.508 "hdgst": false, 00:08:52.508 "ddgst": false 00:08:52.508 }, 00:08:52.508 "method": "bdev_nvme_attach_controller" 00:08:52.508 }' 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.508 "params": { 00:08:52.508 "name": "Nvme1", 00:08:52.508 "trtype": "tcp", 00:08:52.508 "traddr": "10.0.0.2", 00:08:52.508 "adrfam": "ipv4", 00:08:52.508 "trsvcid": "4420", 00:08:52.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.508 "hdgst": false, 00:08:52.508 "ddgst": false 00:08:52.508 }, 00:08:52.508 "method": "bdev_nvme_attach_controller" 00:08:52.508 }' 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.508 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.508 "params": { 00:08:52.508 "name": "Nvme1", 00:08:52.508 "trtype": "tcp", 00:08:52.508 "traddr": "10.0.0.2", 00:08:52.508 "adrfam": "ipv4", 00:08:52.508 "trsvcid": "4420", 00:08:52.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.508 "hdgst": false, 00:08:52.508 "ddgst": false 00:08:52.508 }, 00:08:52.508 "method": "bdev_nvme_attach_controller" 00:08:52.508 }' 00:08:52.508 [2024-11-26 19:10:15.162934] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:08:52.508 [2024-11-26 19:10:15.162935] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:08:52.508 [2024-11-26 19:10:15.162988] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-26 19:10:15.162989] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:52.508 --proc-type=auto ] 00:08:52.509 [2024-11-26 19:10:15.163295] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:08:52.509 [2024-11-26 19:10:15.163331] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:52.509 [2024-11-26 19:10:15.168080] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
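Each bdevperf instance receives the same resolved controller entry. The flattened fragment printed by the trace above, reindented for readability (the cat is only for display; gen_nvmf_target_json additionally wraps this fragment into the final --json payload, a jq step whose output is not shown in the trace):

cat <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF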
00:08:52.509 [2024-11-26 19:10:15.168124] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:52.509 [2024-11-26 19:10:15.352244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.509 [2024-11-26 19:10:15.394769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:52.509 [2024-11-26 19:10:15.444922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.509 [2024-11-26 19:10:15.485158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:52.509 [2024-11-26 19:10:15.538311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.509 [2024-11-26 19:10:15.593415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:52.509 [2024-11-26 19:10:15.597370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.767 [2024-11-26 19:10:15.639898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:52.767 Running I/O for 1 seconds... 00:08:52.767 Running I/O for 1 seconds... 00:08:52.767 Running I/O for 1 seconds... 00:08:53.026 Running I/O for 1 seconds... 00:08:53.962 6749.00 IOPS, 26.36 MiB/s [2024-11-26T18:10:17.076Z] 12522.00 IOPS, 48.91 MiB/s 00:08:53.962 Latency(us) 00:08:53.962 [2024-11-26T18:10:17.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.962 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:53.962 Nvme1n1 : 1.02 6766.85 26.43 0.00 0.00 18741.95 6709.64 28586.18 00:08:53.962 [2024-11-26T18:10:17.076Z] =================================================================================================================== 00:08:53.962 [2024-11-26T18:10:17.076Z] Total : 6766.85 26.43 0.00 0.00 18741.95 6709.64 28586.18 00:08:53.962 00:08:53.962 Latency(us) 00:08:53.962 [2024-11-26T18:10:17.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.962 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:53.962 Nvme1n1 : 1.01 12584.07 49.16 0.00 0.00 10141.12 4337.86 18599.74 00:08:53.962 [2024-11-26T18:10:17.076Z] =================================================================================================================== 00:08:53.962 [2024-11-26T18:10:17.076Z] Total : 12584.07 49.16 0.00 0.00 10141.12 4337.86 18599.74 00:08:53.962 244664.00 IOPS, 955.72 MiB/s [2024-11-26T18:10:17.076Z] 6809.00 IOPS, 26.60 MiB/s 00:08:53.962 Latency(us) 00:08:53.962 [2024-11-26T18:10:17.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.962 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:53.962 Nvme1n1 : 1.00 244296.03 954.28 0.00 0.00 521.08 221.38 1497.97 00:08:53.962 [2024-11-26T18:10:17.076Z] =================================================================================================================== 00:08:53.962 [2024-11-26T18:10:17.076Z] Total : 244296.03 954.28 0.00 0.00 521.08 221.38 1497.97 00:08:53.962 00:08:53.962 Latency(us) 00:08:53.962 [2024-11-26T18:10:17.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.962 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:53.962 Nvme1n1 : 1.01 6938.72 27.10 0.00 0.00 18404.28 3542.06 39196.77 00:08:53.962 [2024-11-26T18:10:17.076Z] 
=================================================================================================================== 00:08:53.962 [2024-11-26T18:10:17.076Z] Total : 6938.72 27.10 0.00 0.00 18404.28 3542.06 39196.77 00:08:53.962 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3605407 00:08:53.962 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3605409 00:08:53.962 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3605412 00:08:53.962 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:53.962 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.962 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.962 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.962 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:53.962 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:53.962 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:53.962 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:53.962 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:53.962 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:53.962 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:53.962 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:53.962 rmmod nvme_tcp 00:08:54.221 rmmod nvme_fabrics 00:08:54.221 rmmod nvme_keyring 00:08:54.221 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:54.221 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:54.221 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:54.221 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3605373 ']' 00:08:54.221 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3605373 00:08:54.221 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3605373 ']' 00:08:54.221 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3605373 00:08:54.221 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:54.221 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.221 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3605373 00:08:54.221 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.221 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.221 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3605373' 00:08:54.221 killing process with pid 3605373 00:08:54.221 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3605373 00:08:54.221 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3605373 00:08:54.221 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:54.221 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:54.221 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:54.221 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:54.222 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:54.222 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:54.222 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:54.222 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:54.222 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:54.222 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.222 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.222 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.756 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:56.756 00:08:56.756 real 0m10.871s 00:08:56.756 user 0m16.730s 00:08:56.756 sys 0m6.183s 00:08:56.756 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.756 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:56.756 ************************************ 00:08:56.756 END TEST nvmf_bdev_io_wait 00:08:56.756 ************************************ 00:08:56.756 19:10:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:56.756 19:10:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:56.756 19:10:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.756 19:10:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:56.756 ************************************ 00:08:56.756 START TEST nvmf_queue_depth 00:08:56.756 ************************************ 00:08:56.756 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:56.756 * Looking for test storage... 
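The teardown traced at the end of nvmf_bdev_io_wait above (the EXIT trap running nvmftestfini) condenses to roughly the following; the pid, namespace and interface names are those of this run, and the ip netns delete line stands in for _remove_spdk_ns, whose own trace is suppressed above:

  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-tcp                                 # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above come from here
  modprobe -v -r nvme-fabrics
  kill 3605373                                            # nvmfpid, the nvmf_tgt started for this test
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK_NVMF-tagged rule added during setup
  ip netns delete cvl_0_0_ns_spdk                         # assumption: what _remove_spdk_ns amounts to here
  ip -4 addr flush cvl_0_1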
00:08:56.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:56.756 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:56.756 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:56.756 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:56.756 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:56.756 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.756 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.756 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.756 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.756 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:56.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.757 --rc genhtml_branch_coverage=1 00:08:56.757 --rc genhtml_function_coverage=1 00:08:56.757 --rc genhtml_legend=1 00:08:56.757 --rc geninfo_all_blocks=1 00:08:56.757 --rc geninfo_unexecuted_blocks=1 00:08:56.757 00:08:56.757 ' 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:56.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.757 --rc genhtml_branch_coverage=1 00:08:56.757 --rc genhtml_function_coverage=1 00:08:56.757 --rc genhtml_legend=1 00:08:56.757 --rc geninfo_all_blocks=1 00:08:56.757 --rc geninfo_unexecuted_blocks=1 00:08:56.757 00:08:56.757 ' 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:56.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.757 --rc genhtml_branch_coverage=1 00:08:56.757 --rc genhtml_function_coverage=1 00:08:56.757 --rc genhtml_legend=1 00:08:56.757 --rc geninfo_all_blocks=1 00:08:56.757 --rc geninfo_unexecuted_blocks=1 00:08:56.757 00:08:56.757 ' 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:56.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.757 --rc genhtml_branch_coverage=1 00:08:56.757 --rc genhtml_function_coverage=1 00:08:56.757 --rc genhtml_legend=1 00:08:56.757 --rc geninfo_all_blocks=1 00:08:56.757 --rc geninfo_unexecuted_blocks=1 00:08:56.757 00:08:56.757 ' 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:56.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.757 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.758 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.758 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:56.758 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:56.758 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:56.758 19:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:03.330 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:03.330 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:03.330 Found net devices under 0000:86:00.0: cvl_0_0 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:03.330 Found net devices under 0000:86:00.1: cvl_0_1 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.330 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:03.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:09:03.331 00:09:03.331 --- 10.0.0.2 ping statistics --- 00:09:03.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.331 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:03.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:03.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:09:03.331 00:09:03.331 --- 10.0.0.1 ping statistics --- 00:09:03.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.331 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3609257 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3609257 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3609257 ']' 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.331 [2024-11-26 19:10:25.683804] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:09:03.331 [2024-11-26 19:10:25.683847] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.331 [2024-11-26 19:10:25.746758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.331 [2024-11-26 19:10:25.791646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.331 [2024-11-26 19:10:25.791689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.331 [2024-11-26 19:10:25.791697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.331 [2024-11-26 19:10:25.791704] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.331 [2024-11-26 19:10:25.791709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.331 [2024-11-26 19:10:25.792166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.331 [2024-11-26 19:10:25.937765] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.331 Malloc0 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.331 19:10:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.331 [2024-11-26 19:10:25.988000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3609437 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3609437 /var/tmp/bdevperf.sock 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3609437 ']' 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:03.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.331 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.331 [2024-11-26 19:10:26.036583] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:09:03.331 [2024-11-26 19:10:26.036624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3609437 ] 00:09:03.331 [2024-11-26 19:10:26.109656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.331 [2024-11-26 19:10:26.152058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.332 19:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.332 19:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:03.332 19:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:03.332 19:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.332 19:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.590 NVMe0n1 00:09:03.590 19:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.590 19:10:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:03.590 Running I/O for 10 seconds... 00:09:05.901 12000.00 IOPS, 46.88 MiB/s [2024-11-26T18:10:29.593Z] 12264.00 IOPS, 47.91 MiB/s [2024-11-26T18:10:30.968Z] 12282.33 IOPS, 47.98 MiB/s [2024-11-26T18:10:31.902Z] 12279.00 IOPS, 47.96 MiB/s [2024-11-26T18:10:32.836Z] 12362.80 IOPS, 48.29 MiB/s [2024-11-26T18:10:33.771Z] 12432.67 IOPS, 48.57 MiB/s [2024-11-26T18:10:34.707Z] 12434.57 IOPS, 48.57 MiB/s [2024-11-26T18:10:35.642Z] 12511.50 IOPS, 48.87 MiB/s [2024-11-26T18:10:37.018Z] 12494.22 IOPS, 48.81 MiB/s [2024-11-26T18:10:37.018Z] 12490.50 IOPS, 48.79 MiB/s 00:09:13.904 Latency(us) 00:09:13.904 [2024-11-26T18:10:37.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.904 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:13.904 Verification LBA range: start 0x0 length 0x4000 00:09:13.904 NVMe0n1 : 10.05 12525.36 48.93 0.00 0.00 81477.22 10360.93 54176.43 00:09:13.904 [2024-11-26T18:10:37.018Z] =================================================================================================================== 00:09:13.904 [2024-11-26T18:10:37.018Z] Total : 12525.36 48.93 0.00 0.00 81477.22 10360.93 54176.43 00:09:13.904 { 00:09:13.904 "results": [ 00:09:13.904 { 00:09:13.904 "job": "NVMe0n1", 00:09:13.904 "core_mask": "0x1", 00:09:13.904 "workload": "verify", 00:09:13.904 "status": "finished", 00:09:13.904 "verify_range": { 00:09:13.904 "start": 0, 00:09:13.904 "length": 16384 00:09:13.904 }, 00:09:13.904 "queue_depth": 1024, 00:09:13.904 "io_size": 4096, 00:09:13.904 "runtime": 10.050648, 00:09:13.904 "iops": 12525.361548827499, 00:09:13.904 "mibps": 48.92719355010742, 00:09:13.904 "io_failed": 0, 00:09:13.904 "io_timeout": 0, 00:09:13.904 "avg_latency_us": 81477.21885007384, 00:09:13.904 "min_latency_us": 10360.929523809524, 00:09:13.904 "max_latency_us": 54176.426666666666 00:09:13.904 } 00:09:13.904 ], 00:09:13.904 "core_count": 1 00:09:13.904 } 00:09:13.904 19:10:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3609437 00:09:13.904 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3609437 ']' 00:09:13.904 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3609437 00:09:13.904 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:13.904 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.904 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3609437 00:09:13.904 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.904 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.904 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3609437' 00:09:13.904 killing process with pid 3609437 00:09:13.904 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3609437 00:09:13.904 Received shutdown signal, test time was about 10.000000 seconds 00:09:13.904 00:09:13.904 Latency(us) 00:09:13.904 [2024-11-26T18:10:37.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.904 [2024-11-26T18:10:37.018Z] =================================================================================================================== 00:09:13.904 [2024-11-26T18:10:37.018Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:13.904 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3609437 00:09:13.905 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:13.905 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:13.905 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:13.905 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:13.905 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:13.905 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:13.905 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:13.905 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:13.905 rmmod nvme_tcp 00:09:13.905 rmmod nvme_fabrics 00:09:13.905 rmmod nvme_keyring 00:09:13.905 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:13.905 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:13.905 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:13.905 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3609257 ']' 00:09:13.905 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3609257 00:09:13.905 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3609257 ']' 00:09:13.905 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 3609257 00:09:13.905 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:13.905 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.905 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3609257 00:09:14.164 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:14.164 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:14.164 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3609257' 00:09:14.164 killing process with pid 3609257 00:09:14.164 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3609257 00:09:14.164 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3609257 00:09:14.164 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:14.164 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:14.164 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:14.164 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:14.164 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:14.164 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:14.164 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:14.164 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:14.164 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:14.164 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.164 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.164 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.241 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:16.241 00:09:16.241 real 0m19.808s 00:09:16.241 user 0m23.349s 00:09:16.241 sys 0m5.961s 00:09:16.241 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.241 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:16.241 ************************************ 00:09:16.241 END TEST nvmf_queue_depth 00:09:16.241 ************************************ 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:16.527 ************************************ 00:09:16.527 START TEST nvmf_target_multipath 00:09:16.527 ************************************ 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:16.527 * Looking for test storage... 00:09:16.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:16.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.527 --rc genhtml_branch_coverage=1 00:09:16.527 --rc genhtml_function_coverage=1 00:09:16.527 --rc genhtml_legend=1 00:09:16.527 --rc geninfo_all_blocks=1 00:09:16.527 --rc geninfo_unexecuted_blocks=1 00:09:16.527 00:09:16.527 ' 00:09:16.527 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:16.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.527 --rc genhtml_branch_coverage=1 00:09:16.527 --rc genhtml_function_coverage=1 00:09:16.528 --rc genhtml_legend=1 00:09:16.528 --rc geninfo_all_blocks=1 00:09:16.528 --rc geninfo_unexecuted_blocks=1 00:09:16.528 00:09:16.528 ' 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:16.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.528 --rc genhtml_branch_coverage=1 00:09:16.528 --rc genhtml_function_coverage=1 00:09:16.528 --rc genhtml_legend=1 00:09:16.528 --rc geninfo_all_blocks=1 00:09:16.528 --rc geninfo_unexecuted_blocks=1 00:09:16.528 00:09:16.528 ' 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:16.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.528 --rc genhtml_branch_coverage=1 00:09:16.528 --rc genhtml_function_coverage=1 00:09:16.528 --rc genhtml_legend=1 00:09:16.528 --rc geninfo_all_blocks=1 00:09:16.528 --rc geninfo_unexecuted_blocks=1 00:09:16.528 00:09:16.528 ' 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:16.528 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:23.095 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:23.095 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:23.095 Found net devices under 0000:86:00.0: cvl_0_0 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.095 19:10:45 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.095 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:23.096 Found net devices under 0000:86:00.1: cvl_0_1 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:23.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:23.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:09:23.096 00:09:23.096 --- 10.0.0.2 ping statistics --- 00:09:23.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.096 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:23.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:23.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:09:23.096 00:09:23.096 --- 10.0.0.1 ping statistics --- 00:09:23.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.096 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:23.096 only one NIC for nvmf test 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
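Editor's note: the nvmf_tcp_init trace above shows how the harness turns the two E810 ports (0000:86:00.0/.1) into a self-contained NVMe/TCP test topology: one port stays in the default namespace as the initiator, the other is moved into a private network namespace for the target. The sketch below is reconstructed from that trace; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the namespace name are simply the values this rig reported, not fixed constants.

    # Target-side port goes into its own network namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator keeps 10.0.0.1 in the default namespace; target gets 10.0.0.2 inside the netns.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring both ends (and loopback inside the netns) up.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP listener port on the initiator-facing interface, tagged with an
    # SPDK_NVMF comment so nvmftestfini can strip exactly this rule again later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Sanity-check reachability in both directions before any SPDK app starts.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With this in place, nvmf_tgt is started under "ip netns exec cvl_0_0_ns_spdk" (the NVMF_TARGET_NS_CMD prefix seen earlier in the trace), so the target listens on 10.0.0.2:4420 inside the namespace while bdevperf and the other initiator-side tools connect from the default namespace over cvl_0_1.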
00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:23.096 rmmod nvme_tcp 00:09:23.096 rmmod nvme_fabrics 00:09:23.096 rmmod nvme_keyring 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.096 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.014 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:25.014 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:25.015 00:09:25.015 real 0m8.398s 00:09:25.015 user 0m1.797s 00:09:25.015 sys 0m4.601s 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:25.015 ************************************ 00:09:25.015 END TEST nvmf_target_multipath 00:09:25.015 ************************************ 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.015 ************************************ 00:09:25.015 START TEST nvmf_zcopy 00:09:25.015 ************************************ 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:25.015 * Looking for test storage... 
00:09:25.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.015 --rc genhtml_branch_coverage=1 00:09:25.015 --rc genhtml_function_coverage=1 00:09:25.015 --rc genhtml_legend=1 00:09:25.015 --rc geninfo_all_blocks=1 00:09:25.015 --rc geninfo_unexecuted_blocks=1 00:09:25.015 00:09:25.015 ' 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.015 --rc genhtml_branch_coverage=1 00:09:25.015 --rc genhtml_function_coverage=1 00:09:25.015 --rc genhtml_legend=1 00:09:25.015 --rc geninfo_all_blocks=1 00:09:25.015 --rc geninfo_unexecuted_blocks=1 00:09:25.015 00:09:25.015 ' 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:25.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.015 --rc genhtml_branch_coverage=1 00:09:25.015 --rc genhtml_function_coverage=1 00:09:25.015 --rc genhtml_legend=1 00:09:25.015 --rc geninfo_all_blocks=1 00:09:25.015 --rc geninfo_unexecuted_blocks=1 00:09:25.015 00:09:25.015 ' 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.015 --rc genhtml_branch_coverage=1 00:09:25.015 --rc genhtml_function_coverage=1 00:09:25.015 --rc genhtml_legend=1 00:09:25.015 --rc geninfo_all_blocks=1 00:09:25.015 --rc geninfo_unexecuted_blocks=1 00:09:25.015 00:09:25.015 ' 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:25.015 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:25.015 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.015 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.015 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.015 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.015 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.015 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.015 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.015 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.015 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.015 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:25.015 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:25.015 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.015 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.015 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.015 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.015 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.015 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.015 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:25.016 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.587 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.587 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:31.587 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:31.587 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:31.587 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:31.587 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:31.587 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:31.587 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:31.587 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:31.587 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:31.587 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:31.587 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:31.587 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:31.587 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:31.587 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:31.587 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.587 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.587 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:31.588 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:31.588 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:31.588 Found net devices under 0000:86:00.0: cvl_0_0 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:31.588 Found net devices under 0000:86:00.1: cvl_0_1 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:31.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:09:31.588 00:09:31.588 --- 10.0.0.2 ping statistics --- 00:09:31.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.588 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:31.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:09:31.588 00:09:31.588 --- 10.0.0.1 ping statistics --- 00:09:31.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.588 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:31.588 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.588 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3618342 00:09:31.588 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:31.588 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3618342 00:09:31.588 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3618342 ']' 00:09:31.588 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.588 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.589 [2024-11-26 19:10:54.052697] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
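Note: nvmfappstart here is what actually boots the SPDK target for the zcopy test. Because NVMF_APP was prefixed with the namespace wrapper at common.sh@293, nvmf_tgt runs inside cvl_0_0_ns_spdk with shared-memory id 0 (-i 0), the full tracepoint mask (-e 0xFFFF) and core mask 0x2, and waitforlisten blocks until the new process (pid 3618342) answers on /var/tmp/spdk.sock. A rough standalone equivalent, run from an SPDK build tree; the polling loop below is an assumption standing in for the harness's waitforlisten helper, with rpc_get_methods used only as a cheap probe RPC:

# Start the target inside the namespace that owns 10.0.0.2.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Poll the default JSON-RPC socket until the app is ready (or has died).
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done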
00:09:31.589 [2024-11-26 19:10:54.052751] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.589 [2024-11-26 19:10:54.130009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.589 [2024-11-26 19:10:54.168204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.589 [2024-11-26 19:10:54.168234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.589 [2024-11-26 19:10:54.168241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.589 [2024-11-26 19:10:54.168246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.589 [2024-11-26 19:10:54.168251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.589 [2024-11-26 19:10:54.168843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.589 [2024-11-26 19:10:54.316652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.589 [2024-11-26 19:10:54.336850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.589 malloc0 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:31.589 { 00:09:31.589 "params": { 00:09:31.589 "name": "Nvme$subsystem", 00:09:31.589 "trtype": "$TEST_TRANSPORT", 00:09:31.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:31.589 "adrfam": "ipv4", 00:09:31.589 "trsvcid": "$NVMF_PORT", 00:09:31.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:31.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:31.589 "hdgst": ${hdgst:-false}, 00:09:31.589 "ddgst": ${ddgst:-false} 00:09:31.589 }, 00:09:31.589 "method": "bdev_nvme_attach_controller" 00:09:31.589 } 00:09:31.589 EOF 00:09:31.589 )") 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
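Note: with the target listening for RPCs, zcopy.sh provisions everything over JSON-RPC and then points bdevperf at it: a TCP transport created with --zcopy, subsystem nqn.2016-06.io.spdk:cnode1 with data and discovery listeners on 10.0.0.2:4420, and a malloc bdev attached as namespace 1. bdevperf receives its controller definition as a JSON config on an inherited file descriptor (the fragment printf'd just below in the trace) and runs 10 seconds of verify I/O at queue depth 128 with 8 KiB I/Os. A condensed sketch of the same sequence; `rpc` below is assumed to be scripts/rpc.py on the default socket (what the harness's rpc_cmd wrapper resolves to), and the outer JSON wrapper is assumed from the usual SPDK config layout, since the trace only prints the inner bdev_nvme_attach_controller entry:

rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

# Flags copied verbatim from the trace (-o and -c 0 come from NVMF_TRANSPORT_OPTS, --zcopy from zcopy.sh).
rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 4096 -b malloc0          # RAM-backed bdev used as the namespace
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Initiator side: bdevperf reads its bdev config from fd 62 and drives the verify workload.
./build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 62<<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

Handing the config over on a file descriptor is simply how gen_nvmf_target_json is consumed in these tests; a plain file path passed to --json serves the same purpose.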
00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:31.589 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:31.589 "params": { 00:09:31.589 "name": "Nvme1", 00:09:31.589 "trtype": "tcp", 00:09:31.589 "traddr": "10.0.0.2", 00:09:31.589 "adrfam": "ipv4", 00:09:31.589 "trsvcid": "4420", 00:09:31.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:31.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:31.589 "hdgst": false, 00:09:31.589 "ddgst": false 00:09:31.589 }, 00:09:31.589 "method": "bdev_nvme_attach_controller" 00:09:31.589 }' 00:09:31.589 [2024-11-26 19:10:54.419795] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:09:31.589 [2024-11-26 19:10:54.419838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3618362 ] 00:09:31.589 [2024-11-26 19:10:54.493897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.589 [2024-11-26 19:10:54.534466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.848 Running I/O for 10 seconds... 00:09:33.719 8625.00 IOPS, 67.38 MiB/s [2024-11-26T18:10:57.769Z] 8711.00 IOPS, 68.05 MiB/s [2024-11-26T18:10:59.145Z] 8740.33 IOPS, 68.28 MiB/s [2024-11-26T18:11:00.078Z] 8772.00 IOPS, 68.53 MiB/s [2024-11-26T18:11:01.013Z] 8778.60 IOPS, 68.58 MiB/s [2024-11-26T18:11:01.948Z] 8771.83 IOPS, 68.53 MiB/s [2024-11-26T18:11:02.885Z] 8776.29 IOPS, 68.56 MiB/s [2024-11-26T18:11:03.820Z] 8786.25 IOPS, 68.64 MiB/s [2024-11-26T18:11:05.196Z] 8791.67 IOPS, 68.68 MiB/s [2024-11-26T18:11:05.196Z] 8796.00 IOPS, 68.72 MiB/s 00:09:42.082 Latency(us) 00:09:42.082 [2024-11-26T18:11:05.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.083 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:42.083 Verification LBA range: start 0x0 length 0x1000 00:09:42.083 Nvme1n1 : 10.01 8797.91 68.73 0.00 0.00 14508.04 2293.76 24217.11 00:09:42.083 [2024-11-26T18:11:05.197Z] =================================================================================================================== 00:09:42.083 [2024-11-26T18:11:05.197Z] Total : 8797.91 68.73 0.00 0.00 14508.04 2293.76 24217.11 00:09:42.083 19:11:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3620109 00:09:42.083 19:11:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:42.083 19:11:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.083 19:11:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:42.083 19:11:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:42.083 19:11:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:42.083 19:11:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:42.083 19:11:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:42.083 19:11:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:42.083 { 00:09:42.083 "params": { 00:09:42.083 "name": 
"Nvme$subsystem", 00:09:42.083 "trtype": "$TEST_TRANSPORT", 00:09:42.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.083 "adrfam": "ipv4", 00:09:42.083 "trsvcid": "$NVMF_PORT", 00:09:42.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.083 "hdgst": ${hdgst:-false}, 00:09:42.083 "ddgst": ${ddgst:-false} 00:09:42.083 }, 00:09:42.083 "method": "bdev_nvme_attach_controller" 00:09:42.083 } 00:09:42.083 EOF 00:09:42.083 )") 00:09:42.083 [2024-11-26 19:11:04.937039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:04.937070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 19:11:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:42.083 19:11:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:42.083 19:11:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:42.083 19:11:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:42.083 "params": { 00:09:42.083 "name": "Nvme1", 00:09:42.083 "trtype": "tcp", 00:09:42.083 "traddr": "10.0.0.2", 00:09:42.083 "adrfam": "ipv4", 00:09:42.083 "trsvcid": "4420", 00:09:42.083 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:42.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:42.083 "hdgst": false, 00:09:42.083 "ddgst": false 00:09:42.083 }, 00:09:42.083 "method": "bdev_nvme_attach_controller" 00:09:42.083 }' 00:09:42.083 [2024-11-26 19:11:04.949040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:04.949053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:04.961067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:04.961077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:04.973097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:04.973107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:04.977858] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:09:42.083 [2024-11-26 19:11:04.977898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3620109 ] 00:09:42.083 [2024-11-26 19:11:04.985140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:04.985150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:04.997158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:04.997168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:05.009196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:05.009207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:05.021226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:05.021235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:05.033257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:05.033267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:05.045291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:05.045301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:05.051642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.083 [2024-11-26 19:11:05.057321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:05.057332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:05.069356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:05.069371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:05.081398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:05.081413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:05.093131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.083 [2024-11-26 19:11:05.093422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:05.093433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:05.105463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:05.105476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:05.117492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:05.117511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:05.129521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:09:42.083 [2024-11-26 19:11:05.129535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:05.141551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:05.141561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:05.153585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:05.153597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:05.165617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:05.165628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:05.177647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:05.177656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.083 [2024-11-26 19:11:05.189699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.083 [2024-11-26 19:11:05.189717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.341 [2024-11-26 19:11:05.201740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.341 [2024-11-26 19:11:05.201758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.341 [2024-11-26 19:11:05.213757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.341 [2024-11-26 19:11:05.213771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.341 [2024-11-26 19:11:05.225791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.341 [2024-11-26 19:11:05.225804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.341 [2024-11-26 19:11:05.237816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.341 [2024-11-26 19:11:05.237825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.341 [2024-11-26 19:11:05.281962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.341 [2024-11-26 19:11:05.281979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.341 Running I/O for 5 seconds... 
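Note: the second bdevperf invocation (perfpid 3620109) reuses the same Nvme1 controller JSON, this time on fd 63, and switches the workload to 5 seconds of 50/50 random read/write, still at queue depth 128 with 8 KiB I/Os. The repeated pairs of "Requested NSID 1 already in use" / "Unable to add namespace" are target-side messages, not initiator failures: while the I/O runs, the test evidently keeps calling nvmf_subsystem_add_ns for NSID 1 and the target rejects it because that namespace already exists; the loop itself sits inside zcopy.sh behind xtrace_disable, so it is not visible in this trace. On the initiator side only the workload flags change, as in this sketch (nvme1_config.json is an assumed stand-in for the JSON the harness generates on the fly):

# Workload flags from the trace: 5 s, randrw 50/50, QD 128, 8 KiB I/O.
# nvme1_config.json is assumed to hold the same bdev_nvme_attach_controller
# config shown in the earlier sketch; the harness hands it over on fd 63.
./build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 \
    63< nvme1_config.json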
00:09:42.341 [2024-11-26 19:11:05.293975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.341 [2024-11-26 19:11:05.293986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.341 [2024-11-26 19:11:05.306571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.341 [2024-11-26 19:11:05.306590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.341 [2024-11-26 19:11:05.317311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.341 [2024-11-26 19:11:05.317330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.341 [2024-11-26 19:11:05.331498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.341 [2024-11-26 19:11:05.331516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.341 [2024-11-26 19:11:05.345413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.341 [2024-11-26 19:11:05.345431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.341 [2024-11-26 19:11:05.359469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.341 [2024-11-26 19:11:05.359486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.341 [2024-11-26 19:11:05.372951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.341 [2024-11-26 19:11:05.372969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.342 [2024-11-26 19:11:05.386598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.342 [2024-11-26 19:11:05.386616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.342 [2024-11-26 19:11:05.400287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.342 [2024-11-26 19:11:05.400305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.342 [2024-11-26 19:11:05.414279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.342 [2024-11-26 19:11:05.414297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.342 [2024-11-26 19:11:05.428032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.342 [2024-11-26 19:11:05.428050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.342 [2024-11-26 19:11:05.441698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.342 [2024-11-26 19:11:05.441717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.600 [2024-11-26 19:11:05.455567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.600 [2024-11-26 19:11:05.455585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.600 [2024-11-26 19:11:05.469006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.600 [2024-11-26 19:11:05.469025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.600 [2024-11-26 19:11:05.482978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.600 
[2024-11-26 19:11:05.482997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.600 [2024-11-26 19:11:05.496277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.600 [2024-11-26 19:11:05.496295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.600 [2024-11-26 19:11:05.510047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.600 [2024-11-26 19:11:05.510066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.600 [2024-11-26 19:11:05.519152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.600 [2024-11-26 19:11:05.519170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.600 [2024-11-26 19:11:05.533091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.600 [2024-11-26 19:11:05.533109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.600 [2024-11-26 19:11:05.546690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.600 [2024-11-26 19:11:05.546708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.600 [2024-11-26 19:11:05.560657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.600 [2024-11-26 19:11:05.560680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.600 [2024-11-26 19:11:05.574430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.600 [2024-11-26 19:11:05.574454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.600 [2024-11-26 19:11:05.588118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.600 [2024-11-26 19:11:05.588136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.600 [2024-11-26 19:11:05.601843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.600 [2024-11-26 19:11:05.601861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.600 [2024-11-26 19:11:05.615584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.600 [2024-11-26 19:11:05.615606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.600 [2024-11-26 19:11:05.629447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.600 [2024-11-26 19:11:05.629466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.600 [2024-11-26 19:11:05.643090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.600 [2024-11-26 19:11:05.643109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.600 [2024-11-26 19:11:05.656869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.600 [2024-11-26 19:11:05.656887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.600 [2024-11-26 19:11:05.671170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.600 [2024-11-26 19:11:05.671189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.600 [2024-11-26 19:11:05.685088] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.600 [2024-11-26 19:11:05.685106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.600 [2024-11-26 19:11:05.699019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.600 [2024-11-26 19:11:05.699037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.859 [2024-11-26 19:11:05.712807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.859 [2024-11-26 19:11:05.712825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.859 [2024-11-26 19:11:05.727109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.859 [2024-11-26 19:11:05.727128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.859 [2024-11-26 19:11:05.741123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.859 [2024-11-26 19:11:05.741141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.859 [2024-11-26 19:11:05.754213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.859 [2024-11-26 19:11:05.754232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.859 [2024-11-26 19:11:05.768082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.859 [2024-11-26 19:11:05.768100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.859 [2024-11-26 19:11:05.781852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.859 [2024-11-26 19:11:05.781869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.859 [2024-11-26 19:11:05.795392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.859 [2024-11-26 19:11:05.795411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.859 [2024-11-26 19:11:05.809244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.859 [2024-11-26 19:11:05.809262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.859 [2024-11-26 19:11:05.822683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.859 [2024-11-26 19:11:05.822701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.859 [2024-11-26 19:11:05.836192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.859 [2024-11-26 19:11:05.836210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.859 [2024-11-26 19:11:05.850240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.859 [2024-11-26 19:11:05.850258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.859 [2024-11-26 19:11:05.863791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.859 [2024-11-26 19:11:05.863809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.859 [2024-11-26 19:11:05.877952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.859 [2024-11-26 19:11:05.877970] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.859 [2024-11-26 19:11:05.891999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.859 [2024-11-26 19:11:05.892018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.859 [2024-11-26 19:11:05.906066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.859 [2024-11-26 19:11:05.906084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.859 [2024-11-26 19:11:05.919875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.859 [2024-11-26 19:11:05.919893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.859 [2024-11-26 19:11:05.933752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.859 [2024-11-26 19:11:05.933769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.859 [2024-11-26 19:11:05.946977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.859 [2024-11-26 19:11:05.946996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.859 [2024-11-26 19:11:05.961040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.859 [2024-11-26 19:11:05.961058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.118 [2024-11-26 19:11:05.974712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.118 [2024-11-26 19:11:05.974731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.118 [2024-11-26 19:11:05.989021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.118 [2024-11-26 19:11:05.989039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.118 [2024-11-26 19:11:06.002730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.118 [2024-11-26 19:11:06.002748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.118 [2024-11-26 19:11:06.016228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.118 [2024-11-26 19:11:06.016246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.118 [2024-11-26 19:11:06.029747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.118 [2024-11-26 19:11:06.029764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.118 [2024-11-26 19:11:06.043122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.118 [2024-11-26 19:11:06.043140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.118 [2024-11-26 19:11:06.052519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.118 [2024-11-26 19:11:06.052541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.118 [2024-11-26 19:11:06.066837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.118 [2024-11-26 19:11:06.066855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.118 [2024-11-26 19:11:06.080437] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.118 [2024-11-26 19:11:06.080456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.118 [2024-11-26 19:11:06.094812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.118 [2024-11-26 19:11:06.094831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.118 [2024-11-26 19:11:06.108347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.118 [2024-11-26 19:11:06.108366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.118 [2024-11-26 19:11:06.122081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.118 [2024-11-26 19:11:06.122101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.118 [2024-11-26 19:11:06.136086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.118 [2024-11-26 19:11:06.136106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.118 [2024-11-26 19:11:06.149844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.118 [2024-11-26 19:11:06.149863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.118 [2024-11-26 19:11:06.163288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.118 [2024-11-26 19:11:06.163311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.118 [2024-11-26 19:11:06.177057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.118 [2024-11-26 19:11:06.177075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.118 [2024-11-26 19:11:06.186151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.119 [2024-11-26 19:11:06.186169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.119 [2024-11-26 19:11:06.200266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.119 [2024-11-26 19:11:06.200284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.119 [2024-11-26 19:11:06.213880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.119 [2024-11-26 19:11:06.213899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.119 [2024-11-26 19:11:06.228032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.119 [2024-11-26 19:11:06.228051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.377 [2024-11-26 19:11:06.237005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.377 [2024-11-26 19:11:06.237025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.377 [2024-11-26 19:11:06.246877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.377 [2024-11-26 19:11:06.246896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.377 [2024-11-26 19:11:06.260923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.377 [2024-11-26 19:11:06.260941] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.377 [2024-11-26 19:11:06.274483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.377 [2024-11-26 19:11:06.274501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.377 [2024-11-26 19:11:06.288127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.377 [2024-11-26 19:11:06.288145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.377 16917.00 IOPS, 132.16 MiB/s [2024-11-26T18:11:06.491Z] [2024-11-26 19:11:06.301925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.377 [2024-11-26 19:11:06.301950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.377 [2024-11-26 19:11:06.315537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.377 [2024-11-26 19:11:06.315556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.377 [2024-11-26 19:11:06.329087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.377 [2024-11-26 19:11:06.329105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.377 [2024-11-26 19:11:06.342918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.377 [2024-11-26 19:11:06.342936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.377 [2024-11-26 19:11:06.356828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.377 [2024-11-26 19:11:06.356846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.377 [2024-11-26 19:11:06.370661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.377 [2024-11-26 19:11:06.370686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.377 [2024-11-26 19:11:06.384351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.377 [2024-11-26 19:11:06.384369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.377 [2024-11-26 19:11:06.398021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.377 [2024-11-26 19:11:06.398040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.377 [2024-11-26 19:11:06.411931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.377 [2024-11-26 19:11:06.411950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.377 [2024-11-26 19:11:06.425411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.377 [2024-11-26 19:11:06.425429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.377 [2024-11-26 19:11:06.439167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.377 [2024-11-26 19:11:06.439186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.377 [2024-11-26 19:11:06.453007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.377 [2024-11-26 19:11:06.453025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.377 [2024-11-26 
19:11:06.462082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.377 [2024-11-26 19:11:06.462099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.377 [2024-11-26 19:11:06.476074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.377 [2024-11-26 19:11:06.476092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.636 [2024-11-26 19:11:06.489912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.636 [2024-11-26 19:11:06.489931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.636 [2024-11-26 19:11:06.503813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.636 [2024-11-26 19:11:06.503831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.636 [2024-11-26 19:11:06.517382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.636 [2024-11-26 19:11:06.517403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.636 [2024-11-26 19:11:06.531340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.636 [2024-11-26 19:11:06.531358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.636 [2024-11-26 19:11:06.545495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.636 [2024-11-26 19:11:06.545514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.636 [2024-11-26 19:11:06.559604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.636 [2024-11-26 19:11:06.559630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.636 [2024-11-26 19:11:06.572935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.636 [2024-11-26 19:11:06.572953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.636 [2024-11-26 19:11:06.586951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.636 [2024-11-26 19:11:06.586969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.636 [2024-11-26 19:11:06.600842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.636 [2024-11-26 19:11:06.600871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.636 [2024-11-26 19:11:06.614649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.636 [2024-11-26 19:11:06.614667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.636 [2024-11-26 19:11:06.628546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.636 [2024-11-26 19:11:06.628564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.636 [2024-11-26 19:11:06.642159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.636 [2024-11-26 19:11:06.642176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.636 [2024-11-26 19:11:06.655749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.636 [2024-11-26 19:11:06.655767] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.636 [2024-11-26 19:11:06.669558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.636 [2024-11-26 19:11:06.669576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.636 [2024-11-26 19:11:06.683532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.636 [2024-11-26 19:11:06.683550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.636 [2024-11-26 19:11:06.697554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.636 [2024-11-26 19:11:06.697572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.636 [2024-11-26 19:11:06.711223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.636 [2024-11-26 19:11:06.711241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.636 [2024-11-26 19:11:06.725459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.636 [2024-11-26 19:11:06.725477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.636 [2024-11-26 19:11:06.736877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.636 [2024-11-26 19:11:06.736894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.751659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.751684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.762142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.762159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.777084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.777102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.788413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.788430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.798188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.798205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.812322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.812340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.825899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.825918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.840133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.840152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.851285] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.851303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.860597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.860614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.875169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.875187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.889012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.889030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.902819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.902837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.916561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.916579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.930349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.930368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.944538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.944556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.954008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.954027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.968168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.968187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.981833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.981852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.895 [2024-11-26 19:11:06.995796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.895 [2024-11-26 19:11:06.995814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.009781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.009800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.023569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.023589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.037259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.037277] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.051508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.051526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.062094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.062111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.076304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.076322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.090070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.090089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.103657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.103681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.117703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.117721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.131724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.131741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.145752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.145770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.159524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.159541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.169050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.169067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.178474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.178492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.192503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.192520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.205846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.205864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.219813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.219832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.233803] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.233821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.247217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.247235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.154 [2024-11-26 19:11:07.261194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.154 [2024-11-26 19:11:07.261212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.413 [2024-11-26 19:11:07.275741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.413 [2024-11-26 19:11:07.275759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.413 [2024-11-26 19:11:07.290909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.413 [2024-11-26 19:11:07.290927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.413 16928.00 IOPS, 132.25 MiB/s [2024-11-26T18:11:07.527Z] [2024-11-26 19:11:07.304497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.413 [2024-11-26 19:11:07.304519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.413 [2024-11-26 19:11:07.318213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.413 [2024-11-26 19:11:07.318231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.413 [2024-11-26 19:11:07.331732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.413 [2024-11-26 19:11:07.331750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.413 [2024-11-26 19:11:07.345232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.413 [2024-11-26 19:11:07.345250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.413 [2024-11-26 19:11:07.358867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.413 [2024-11-26 19:11:07.358885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.413 [2024-11-26 19:11:07.372570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.413 [2024-11-26 19:11:07.372588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.413 [2024-11-26 19:11:07.386253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.413 [2024-11-26 19:11:07.386270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.413 [2024-11-26 19:11:07.399865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.413 [2024-11-26 19:11:07.399883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.413 [2024-11-26 19:11:07.413503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.413 [2024-11-26 19:11:07.413520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.413 [2024-11-26 19:11:07.427689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
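Note on the interleaved throughput samples: readings such as "16928.00 IOPS, 132.25 MiB/s" above are mutually consistent with an 8 KiB I/O size — an inference from the numbers themselves, not something stated in this part of the log — since 16928 * 8192 B = 138,674,176 B/s = 132.25 MiB/s. A quick way to check the conversion, assuming nothing beyond POSIX awk:

  awk 'BEGIN { printf "%.2f MiB/s\n", 16928 * 8192 / (1024 * 1024) }'   # prints 132.25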
00:09:44.413 [2024-11-26 19:11:07.427709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.413 [2024-11-26 19:11:07.441357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.414 [2024-11-26 19:11:07.441374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.414 [2024-11-26 19:11:07.455172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.414 [2024-11-26 19:11:07.455190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.414 [2024-11-26 19:11:07.468963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.414 [2024-11-26 19:11:07.468981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.414 [2024-11-26 19:11:07.483094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.414 [2024-11-26 19:11:07.483114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.414 [2024-11-26 19:11:07.494311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.414 [2024-11-26 19:11:07.494330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.414 [2024-11-26 19:11:07.508519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.414 [2024-11-26 19:11:07.508539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.414 [2024-11-26 19:11:07.522586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.414 [2024-11-26 19:11:07.522607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.673 [2024-11-26 19:11:07.533596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.673 [2024-11-26 19:11:07.533615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.673 [2024-11-26 19:11:07.547471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.673 [2024-11-26 19:11:07.547490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.673 [2024-11-26 19:11:07.561005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.673 [2024-11-26 19:11:07.561028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.673 [2024-11-26 19:11:07.574802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.673 [2024-11-26 19:11:07.574821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.673 [2024-11-26 19:11:07.588620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.673 [2024-11-26 19:11:07.588638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.673 [2024-11-26 19:11:07.602138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.673 [2024-11-26 19:11:07.602157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.673 [2024-11-26 19:11:07.616035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.673 [2024-11-26 19:11:07.616054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.673 [2024-11-26 19:11:07.629634] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.673 [2024-11-26 19:11:07.629653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.673 [2024-11-26 19:11:07.643376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.673 [2024-11-26 19:11:07.643394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.673 [2024-11-26 19:11:07.656842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.673 [2024-11-26 19:11:07.656861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.673 [2024-11-26 19:11:07.670814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.673 [2024-11-26 19:11:07.670833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.673 [2024-11-26 19:11:07.684091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.673 [2024-11-26 19:11:07.684109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.673 [2024-11-26 19:11:07.698051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.673 [2024-11-26 19:11:07.698069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.673 [2024-11-26 19:11:07.711889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.673 [2024-11-26 19:11:07.711908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.673 [2024-11-26 19:11:07.725855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.673 [2024-11-26 19:11:07.725874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.673 [2024-11-26 19:11:07.734845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.673 [2024-11-26 19:11:07.734864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.673 [2024-11-26 19:11:07.749036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.673 [2024-11-26 19:11:07.749055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.673 [2024-11-26 19:11:07.762881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.673 [2024-11-26 19:11:07.762899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.673 [2024-11-26 19:11:07.776401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.673 [2024-11-26 19:11:07.776420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.932 [2024-11-26 19:11:07.790164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.932 [2024-11-26 19:11:07.790183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.932 [2024-11-26 19:11:07.803719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.932 [2024-11-26 19:11:07.803738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.932 [2024-11-26 19:11:07.817087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.932 [2024-11-26 19:11:07.817111] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.932 [2024-11-26 19:11:07.830676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.932 [2024-11-26 19:11:07.830694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.932 [2024-11-26 19:11:07.839576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.932 [2024-11-26 19:11:07.839593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.932 [2024-11-26 19:11:07.853564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.932 [2024-11-26 19:11:07.853582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.932 [2024-11-26 19:11:07.867071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.932 [2024-11-26 19:11:07.867090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.932 [2024-11-26 19:11:07.881113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.932 [2024-11-26 19:11:07.881132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.932 [2024-11-26 19:11:07.895065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.932 [2024-11-26 19:11:07.895083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.932 [2024-11-26 19:11:07.908955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.932 [2024-11-26 19:11:07.908973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.932 [2024-11-26 19:11:07.922132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.932 [2024-11-26 19:11:07.922150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.932 [2024-11-26 19:11:07.935731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.932 [2024-11-26 19:11:07.935749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.932 [2024-11-26 19:11:07.949405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.932 [2024-11-26 19:11:07.949423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.932 [2024-11-26 19:11:07.962952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.932 [2024-11-26 19:11:07.962970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.932 [2024-11-26 19:11:07.976876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.932 [2024-11-26 19:11:07.976894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.932 [2024-11-26 19:11:07.990412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.932 [2024-11-26 19:11:07.990430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.932 [2024-11-26 19:11:08.003891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.932 [2024-11-26 19:11:08.003908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.932 [2024-11-26 19:11:08.017839] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.932 [2024-11-26 19:11:08.017858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.932 [2024-11-26 19:11:08.031792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.932 [2024-11-26 19:11:08.031811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.190 [2024-11-26 19:11:08.045763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.190 [2024-11-26 19:11:08.045782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.190 [2024-11-26 19:11:08.059679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.190 [2024-11-26 19:11:08.059697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.190 [2024-11-26 19:11:08.073330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.190 [2024-11-26 19:11:08.073353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.190 [2024-11-26 19:11:08.087041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.190 [2024-11-26 19:11:08.087059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.190 [2024-11-26 19:11:08.100668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.190 [2024-11-26 19:11:08.100691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.190 [2024-11-26 19:11:08.114839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.190 [2024-11-26 19:11:08.114857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.190 [2024-11-26 19:11:08.128142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.190 [2024-11-26 19:11:08.128160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.190 [2024-11-26 19:11:08.142012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.190 [2024-11-26 19:11:08.142030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.190 [2024-11-26 19:11:08.155837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.190 [2024-11-26 19:11:08.155855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.190 [2024-11-26 19:11:08.169809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.190 [2024-11-26 19:11:08.169827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.190 [2024-11-26 19:11:08.183203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.190 [2024-11-26 19:11:08.183221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.190 [2024-11-26 19:11:08.197029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.190 [2024-11-26 19:11:08.197048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.190 [2024-11-26 19:11:08.210275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.190 [2024-11-26 19:11:08.210292] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.190 [2024-11-26 19:11:08.223829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.190 [2024-11-26 19:11:08.223847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.190 [2024-11-26 19:11:08.237619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.190 [2024-11-26 19:11:08.237638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.190 [2024-11-26 19:11:08.251469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.190 [2024-11-26 19:11:08.251486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.190 [2024-11-26 19:11:08.265298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.190 [2024-11-26 19:11:08.265319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.190 [2024-11-26 19:11:08.279141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.190 [2024-11-26 19:11:08.279159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.190 [2024-11-26 19:11:08.293078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.190 [2024-11-26 19:11:08.293097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.448 16965.67 IOPS, 132.54 MiB/s [2024-11-26T18:11:08.562Z] [2024-11-26 19:11:08.307496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.448 [2024-11-26 19:11:08.307514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.448 [2024-11-26 19:11:08.322375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.448 [2024-11-26 19:11:08.322393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.449 [2024-11-26 19:11:08.336489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.449 [2024-11-26 19:11:08.336507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.449 [2024-11-26 19:11:08.350069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.449 [2024-11-26 19:11:08.350088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.449 [2024-11-26 19:11:08.363999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.449 [2024-11-26 19:11:08.364018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.449 [2024-11-26 19:11:08.377549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.449 [2024-11-26 19:11:08.377569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.449 [2024-11-26 19:11:08.391369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.449 [2024-11-26 19:11:08.391387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.449 [2024-11-26 19:11:08.405108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.449 [2024-11-26 19:11:08.405126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.449 [2024-11-26 
19:11:08.419000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.449 [2024-11-26 19:11:08.419019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.449 [2024-11-26 19:11:08.432479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.449 [2024-11-26 19:11:08.432496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.449 [2024-11-26 19:11:08.441833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.449 [2024-11-26 19:11:08.441850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.449 [2024-11-26 19:11:08.455950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.449 [2024-11-26 19:11:08.455967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.449 [2024-11-26 19:11:08.469665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.449 [2024-11-26 19:11:08.469689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.449 [2024-11-26 19:11:08.483787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.449 [2024-11-26 19:11:08.483806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.449 [2024-11-26 19:11:08.497281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.449 [2024-11-26 19:11:08.497298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.449 [2024-11-26 19:11:08.511199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.449 [2024-11-26 19:11:08.511218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.449 [2024-11-26 19:11:08.524897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.449 [2024-11-26 19:11:08.524915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.449 [2024-11-26 19:11:08.538880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.449 [2024-11-26 19:11:08.538898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.449 [2024-11-26 19:11:08.552436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.449 [2024-11-26 19:11:08.552454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.707 [2024-11-26 19:11:08.566290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.707 [2024-11-26 19:11:08.566309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.707 [2024-11-26 19:11:08.579983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.707 [2024-11-26 19:11:08.580002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.707 [2024-11-26 19:11:08.593525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.707 [2024-11-26 19:11:08.593544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.707 [2024-11-26 19:11:08.602416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.707 [2024-11-26 19:11:08.602434] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.707 [2024-11-26 19:11:08.616447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.707 [2024-11-26 19:11:08.616465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.707 [2024-11-26 19:11:08.630186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.707 [2024-11-26 19:11:08.630204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.707 [2024-11-26 19:11:08.643755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.707 [2024-11-26 19:11:08.643773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.707 [2024-11-26 19:11:08.657203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.707 [2024-11-26 19:11:08.657222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.707 [2024-11-26 19:11:08.670561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.707 [2024-11-26 19:11:08.670579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.707 [2024-11-26 19:11:08.684379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.707 [2024-11-26 19:11:08.684398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.707 [2024-11-26 19:11:08.697856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.707 [2024-11-26 19:11:08.697874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.707 [2024-11-26 19:11:08.711339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.707 [2024-11-26 19:11:08.711357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.707 [2024-11-26 19:11:08.724815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.707 [2024-11-26 19:11:08.724836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.707 [2024-11-26 19:11:08.738602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.707 [2024-11-26 19:11:08.738620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.707 [2024-11-26 19:11:08.752079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.707 [2024-11-26 19:11:08.752097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.707 [2024-11-26 19:11:08.766181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.707 [2024-11-26 19:11:08.766199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.707 [2024-11-26 19:11:08.779579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.707 [2024-11-26 19:11:08.779598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.707 [2024-11-26 19:11:08.793000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.707 [2024-11-26 19:11:08.793017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.707 [2024-11-26 19:11:08.806694] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.707 [2024-11-26 19:11:08.806712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.965 [2024-11-26 19:11:08.820707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.965 [2024-11-26 19:11:08.820726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.965 [2024-11-26 19:11:08.834507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.965 [2024-11-26 19:11:08.834526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.965 [2024-11-26 19:11:08.848408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.965 [2024-11-26 19:11:08.848427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.965 [2024-11-26 19:11:08.861806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.965 [2024-11-26 19:11:08.861826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.965 [2024-11-26 19:11:08.875287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.965 [2024-11-26 19:11:08.875306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.965 [2024-11-26 19:11:08.889188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.965 [2024-11-26 19:11:08.889208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.965 [2024-11-26 19:11:08.903119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.965 [2024-11-26 19:11:08.903138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.965 [2024-11-26 19:11:08.916608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.965 [2024-11-26 19:11:08.916627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.965 [2024-11-26 19:11:08.930800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.965 [2024-11-26 19:11:08.930818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.965 [2024-11-26 19:11:08.941778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.965 [2024-11-26 19:11:08.941795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.965 [2024-11-26 19:11:08.956180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.965 [2024-11-26 19:11:08.956198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.965 [2024-11-26 19:11:08.970150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.965 [2024-11-26 19:11:08.970169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.965 [2024-11-26 19:11:08.984031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.965 [2024-11-26 19:11:08.984049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.965 [2024-11-26 19:11:08.997774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.965 [2024-11-26 19:11:08.997793] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.965 [2024-11-26 19:11:09.011837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.965 [2024-11-26 19:11:09.011857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.965 [2024-11-26 19:11:09.025481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.965 [2024-11-26 19:11:09.025500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.965 [2024-11-26 19:11:09.039252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.965 [2024-11-26 19:11:09.039270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.965 [2024-11-26 19:11:09.052611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.965 [2024-11-26 19:11:09.052629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.965 [2024-11-26 19:11:09.066570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.965 [2024-11-26 19:11:09.066589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 [2024-11-26 19:11:09.080420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.080439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 [2024-11-26 19:11:09.094480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.094504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 [2024-11-26 19:11:09.105345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.105364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 [2024-11-26 19:11:09.119164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.119183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 [2024-11-26 19:11:09.128253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.128271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 [2024-11-26 19:11:09.142184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.142203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 [2024-11-26 19:11:09.155708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.155727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 [2024-11-26 19:11:09.169271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.169290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 [2024-11-26 19:11:09.183239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.183258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 [2024-11-26 19:11:09.196795] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.196814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 [2024-11-26 19:11:09.210719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.210738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 [2024-11-26 19:11:09.223828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.223847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 [2024-11-26 19:11:09.238252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.238271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 [2024-11-26 19:11:09.248659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.248686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 [2024-11-26 19:11:09.262447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.262466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 [2024-11-26 19:11:09.276120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.276138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 [2024-11-26 19:11:09.289499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.289517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 [2024-11-26 19:11:09.303674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.303692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 16992.25 IOPS, 132.75 MiB/s [2024-11-26T18:11:09.336Z] [2024-11-26 19:11:09.317636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.317654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.222 [2024-11-26 19:11:09.331641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.222 [2024-11-26 19:11:09.331659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.480 [2024-11-26 19:11:09.345820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.480 [2024-11-26 19:11:09.345845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.480 [2024-11-26 19:11:09.359899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.480 [2024-11-26 19:11:09.359917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.480 [2024-11-26 19:11:09.373598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.480 [2024-11-26 19:11:09.373615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.480 [2024-11-26 19:11:09.387393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:46.480 [2024-11-26 19:11:09.387411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.480 [2024-11-26 19:11:09.401261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.480 [2024-11-26 19:11:09.401279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.480 [2024-11-26 19:11:09.415037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.480 [2024-11-26 19:11:09.415055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.480 [2024-11-26 19:11:09.429145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.480 [2024-11-26 19:11:09.429163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.480 [2024-11-26 19:11:09.442941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.480 [2024-11-26 19:11:09.442959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.480 [2024-11-26 19:11:09.456562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.480 [2024-11-26 19:11:09.456579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.480 [2024-11-26 19:11:09.470779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.480 [2024-11-26 19:11:09.470796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.480 [2024-11-26 19:11:09.484972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.480 [2024-11-26 19:11:09.484991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.480 [2024-11-26 19:11:09.499022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.480 [2024-11-26 19:11:09.499040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.480 [2024-11-26 19:11:09.512709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.480 [2024-11-26 19:11:09.512732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.480 [2024-11-26 19:11:09.526796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.480 [2024-11-26 19:11:09.526814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.480 [2024-11-26 19:11:09.540557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.480 [2024-11-26 19:11:09.540575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.480 [2024-11-26 19:11:09.554170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.480 [2024-11-26 19:11:09.554188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.480 [2024-11-26 19:11:09.568132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.480 [2024-11-26 19:11:09.568150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.480 [2024-11-26 19:11:09.582258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.480 [2024-11-26 19:11:09.582279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.739 [2024-11-26 19:11:09.596351] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.739 [2024-11-26 19:11:09.596370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.739 [2024-11-26 19:11:09.611834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.739 [2024-11-26 19:11:09.611858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.739 [2024-11-26 19:11:09.625868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.739 [2024-11-26 19:11:09.625887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.739 [2024-11-26 19:11:09.639409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.739 [2024-11-26 19:11:09.639426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.739 [2024-11-26 19:11:09.653139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.739 [2024-11-26 19:11:09.653157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.739 [2024-11-26 19:11:09.666927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.739 [2024-11-26 19:11:09.666945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.739 [2024-11-26 19:11:09.681231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.739 [2024-11-26 19:11:09.681249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.739 [2024-11-26 19:11:09.694798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.739 [2024-11-26 19:11:09.694816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.739 [2024-11-26 19:11:09.708613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.739 [2024-11-26 19:11:09.708631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.739 [2024-11-26 19:11:09.722008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.739 [2024-11-26 19:11:09.722026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.739 [2024-11-26 19:11:09.735565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.739 [2024-11-26 19:11:09.735584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.739 [2024-11-26 19:11:09.749027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.739 [2024-11-26 19:11:09.749047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.739 [2024-11-26 19:11:09.762445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.739 [2024-11-26 19:11:09.762463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.739 [2024-11-26 19:11:09.776692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.739 [2024-11-26 19:11:09.776710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.739 [2024-11-26 19:11:09.790634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.739 [2024-11-26 19:11:09.790652] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.739 [2024-11-26 19:11:09.804368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.739 [2024-11-26 19:11:09.804386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.739 [2024-11-26 19:11:09.818067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.739 [2024-11-26 19:11:09.818086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.739 [2024-11-26 19:11:09.831546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.739 [2024-11-26 19:11:09.831565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.739 [2024-11-26 19:11:09.845314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.739 [2024-11-26 19:11:09.845333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.998 [2024-11-26 19:11:09.859636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.998 [2024-11-26 19:11:09.859655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.998 [2024-11-26 19:11:09.870394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.998 [2024-11-26 19:11:09.870412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.998 [2024-11-26 19:11:09.880050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.998 [2024-11-26 19:11:09.880068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.998 [2024-11-26 19:11:09.894159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.998 [2024-11-26 19:11:09.894177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.998 [2024-11-26 19:11:09.907748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.998 [2024-11-26 19:11:09.907768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.998 [2024-11-26 19:11:09.921471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.998 [2024-11-26 19:11:09.921490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.998 [2024-11-26 19:11:09.934885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.998 [2024-11-26 19:11:09.934904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.998 [2024-11-26 19:11:09.949322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.998 [2024-11-26 19:11:09.949340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.998 [2024-11-26 19:11:09.960178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.998 [2024-11-26 19:11:09.960196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.998 [2024-11-26 19:11:09.974479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.998 [2024-11-26 19:11:09.974497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.998 [2024-11-26 19:11:09.988474] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.998 [2024-11-26 19:11:09.988492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.998 [2024-11-26 19:11:10.002268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.998 [2024-11-26 19:11:10.002286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.998 [2024-11-26 19:11:10.017111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.998 [2024-11-26 19:11:10.017129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.998 [2024-11-26 19:11:10.031991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.998 [2024-11-26 19:11:10.032009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.998 [2024-11-26 19:11:10.046094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.998 [2024-11-26 19:11:10.046113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.998 [2024-11-26 19:11:10.060520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.998 [2024-11-26 19:11:10.060539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.998 [2024-11-26 19:11:10.074442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.998 [2024-11-26 19:11:10.074461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.998 [2024-11-26 19:11:10.088928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.998 [2024-11-26 19:11:10.089033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.998 [2024-11-26 19:11:10.100659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.998 [2024-11-26 19:11:10.100684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 [2024-11-26 19:11:10.115377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.115396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 [2024-11-26 19:11:10.129006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.129025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 [2024-11-26 19:11:10.142862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.142881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 [2024-11-26 19:11:10.157089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.157107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 [2024-11-26 19:11:10.168119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.168136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 [2024-11-26 19:11:10.182306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.182324] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 [2024-11-26 19:11:10.196182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.196200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 [2024-11-26 19:11:10.210020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.210037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 [2024-11-26 19:11:10.223860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.223878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 [2024-11-26 19:11:10.237649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.237667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 [2024-11-26 19:11:10.251628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.251646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 [2024-11-26 19:11:10.265079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.265099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 [2024-11-26 19:11:10.279016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.279037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 [2024-11-26 19:11:10.292848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.292868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 [2024-11-26 19:11:10.306867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.306886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 16965.60 IOPS, 132.54 MiB/s 00:09:47.257 Latency(us) 00:09:47.257 [2024-11-26T18:11:10.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.257 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:47.257 Nvme1n1 : 5.01 16968.39 132.57 0.00 0.00 7536.87 3573.27 17351.44 00:09:47.257 [2024-11-26T18:11:10.371Z] =================================================================================================================== 00:09:47.257 [2024-11-26T18:11:10.371Z] Total : 16968.39 132.57 0.00 0.00 7536.87 3573.27 17351.44 00:09:47.257 [2024-11-26 19:11:10.314970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.314988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 [2024-11-26 19:11:10.326997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.327020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 [2024-11-26 19:11:10.339050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.339065] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 [2024-11-26 19:11:10.351066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.351084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.257 [2024-11-26 19:11:10.363096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.257 [2024-11-26 19:11:10.363109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.515 [2024-11-26 19:11:10.375135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.515 [2024-11-26 19:11:10.375156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.515 [2024-11-26 19:11:10.387166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.515 [2024-11-26 19:11:10.387179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.515 [2024-11-26 19:11:10.399194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.516 [2024-11-26 19:11:10.399207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.516 [2024-11-26 19:11:10.411224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.516 [2024-11-26 19:11:10.411237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.516 [2024-11-26 19:11:10.423272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.516 [2024-11-26 19:11:10.423285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.516 [2024-11-26 19:11:10.435286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.516 [2024-11-26 19:11:10.435296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.516 [2024-11-26 19:11:10.447321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.516 [2024-11-26 19:11:10.447334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.516 [2024-11-26 19:11:10.459353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.516 [2024-11-26 19:11:10.459365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.516 [2024-11-26 19:11:10.471383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.516 [2024-11-26 19:11:10.471394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3620109) - No such process 00:09:47.516 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3620109 00:09:47.516 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.516 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.516 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.516 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.516 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd 
bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:47.516 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.516 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.516 delay0 00:09:47.516 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.516 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:47.516 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.516 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.516 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.516 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:47.774 [2024-11-26 19:11:10.639113] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:54.338 Initializing NVMe Controllers 00:09:54.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:54.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:54.338 Initialization complete. Launching workers. 00:09:54.338 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 102 00:09:54.338 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 389, failed to submit 33 00:09:54.338 success 212, unsuccessful 177, failed 0 00:09:54.338 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:54.338 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:54.338 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:54.338 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:54.338 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:54.338 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:54.338 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:54.338 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:54.338 rmmod nvme_tcp 00:09:54.338 rmmod nvme_fabrics 00:09:54.338 rmmod nvme_keyring 00:09:54.338 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:54.338 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:54.338 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:54.338 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3618342 ']' 00:09:54.338 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3618342 00:09:54.338 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3618342 ']' 00:09:54.339 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # 
kill -0 3618342 00:09:54.339 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:54.339 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.339 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3618342 00:09:54.339 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:54.339 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:54.339 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3618342' 00:09:54.339 killing process with pid 3618342 00:09:54.339 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3618342 00:09:54.339 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3618342 00:09:54.339 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:54.339 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:54.339 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:54.339 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:54.339 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:54.339 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:54.339 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:54.339 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:54.339 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:54.339 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.339 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.339 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.244 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:56.244 00:09:56.244 real 0m31.318s 00:09:56.244 user 0m42.031s 00:09:56.244 sys 0m10.822s 00:09:56.244 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.244 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.244 ************************************ 00:09:56.244 END TEST nvmf_zcopy 00:09:56.244 ************************************ 00:09:56.244 19:11:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:56.244 19:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:56.244 19:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.244 19:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:56.244 ************************************ 00:09:56.244 START TEST nvmf_nmic 00:09:56.244 ************************************ 00:09:56.244 19:11:19 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:56.244 * Looking for test storage... 00:09:56.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:56.244 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:56.244 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:56.244 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:56.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.504 --rc genhtml_branch_coverage=1 00:09:56.504 --rc genhtml_function_coverage=1 00:09:56.504 --rc genhtml_legend=1 00:09:56.504 --rc geninfo_all_blocks=1 00:09:56.504 --rc geninfo_unexecuted_blocks=1 00:09:56.504 00:09:56.504 ' 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:56.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.504 --rc genhtml_branch_coverage=1 00:09:56.504 --rc genhtml_function_coverage=1 00:09:56.504 --rc genhtml_legend=1 00:09:56.504 --rc geninfo_all_blocks=1 00:09:56.504 --rc geninfo_unexecuted_blocks=1 00:09:56.504 00:09:56.504 ' 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:56.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.504 --rc genhtml_branch_coverage=1 00:09:56.504 --rc genhtml_function_coverage=1 00:09:56.504 --rc genhtml_legend=1 00:09:56.504 --rc geninfo_all_blocks=1 00:09:56.504 --rc geninfo_unexecuted_blocks=1 00:09:56.504 00:09:56.504 ' 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:56.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.504 --rc genhtml_branch_coverage=1 00:09:56.504 --rc genhtml_function_coverage=1 00:09:56.504 --rc genhtml_legend=1 00:09:56.504 --rc geninfo_all_blocks=1 00:09:56.504 --rc geninfo_unexecuted_blocks=1 00:09:56.504 00:09:56.504 ' 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.504 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:56.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:56.505 
19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:56.505 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.082 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.082 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:03.082 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:03.082 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:03.082 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:03.082 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:03.082 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:03.082 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:03.082 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:03.082 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:03.082 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:03.082 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:03.082 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:03.082 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:03.082 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:03.082 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.082 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.082 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.082 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:03.083 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:03.083 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.083 19:11:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:03.083 Found net devices under 0000:86:00.0: cvl_0_0 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:03.083 Found net devices under 0000:86:00.1: cvl_0_1 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.083 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.083 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.083 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.083 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:03.083 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.083 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:03.083 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:03.083 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:03.083 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:03.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:10:03.083 00:10:03.083 --- 10.0.0.2 ping statistics --- 00:10:03.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.083 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:10:03.083 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:03.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:03.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:10:03.084 00:10:03.084 --- 10.0.0.1 ping statistics --- 00:10:03.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.084 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3625572 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3625572 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3625572 ']' 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.084 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.084 [2024-11-26 19:11:25.337403] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
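The nvmf_tcp_init trace above wires up the two-port test topology that the rest of this run reuses: the first ice port (cvl_0_0) is moved into a private network namespace and acts as the target, the second port (cvl_0_1) stays in the root namespace as the initiator, each side gets a 10.0.0.x/24 address, an iptables rule opens TCP port 4420, and a ping in each direction confirms reachability before nvmf_tgt is launched inside the namespace. The script below is a minimal standalone sketch of that wiring, assuming the interface names and addresses shown in the trace; the SPDK helpers (ipts, NVMF_TARGET_NS_CMD) and their iptables comment tagging are left out.

#!/usr/bin/env bash
# Sketch of the netns-based NVMe/TCP test topology seen in the trace above.
# Assumes two ports named cvl_0_0 (target side) and cvl_0_1 (initiator side).
set -euo pipefail

NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target port lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address (root namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside the namespace)

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity checks in both directions, as the test does before starting nvmf_tgt.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

Keeping the target in its own namespace on a NET_TYPE=phy run means the NVMe/TCP traffic appears to leave one physical e810 port and come back in on the other, rather than short-circuiting over loopback.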
00:10:03.084 [2024-11-26 19:11:25.337459] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.084 [2024-11-26 19:11:25.415305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.084 [2024-11-26 19:11:25.460015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.084 [2024-11-26 19:11:25.460049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:03.084 [2024-11-26 19:11:25.460056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.084 [2024-11-26 19:11:25.460062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.084 [2024-11-26 19:11:25.460067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.084 [2024-11-26 19:11:25.461583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.084 [2024-11-26 19:11:25.461687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.084 [2024-11-26 19:11:25.461714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.084 [2024-11-26 19:11:25.461715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.084 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.084 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:03.084 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:03.084 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:03.084 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.342 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.342 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:03.342 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.342 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.342 [2024-11-26 19:11:26.205858] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.342 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.342 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:03.342 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.342 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.342 Malloc0 00:10:03.342 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.342 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:03.342 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.342 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:03.342 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.342 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.342 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.342 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.343 [2024-11-26 19:11:26.273890] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:03.343 test case1: single bdev can't be used in multiple subsystems 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.343 [2024-11-26 19:11:26.301793] bdev.c:8467:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:03.343 [2024-11-26 19:11:26.301813] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:03.343 [2024-11-26 19:11:26.301821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.343 request: 00:10:03.343 { 00:10:03.343 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:03.343 "namespace": { 00:10:03.343 "bdev_name": "Malloc0", 00:10:03.343 "no_auto_visible": false 
00:10:03.343 }, 00:10:03.343 "method": "nvmf_subsystem_add_ns", 00:10:03.343 "req_id": 1 00:10:03.343 } 00:10:03.343 Got JSON-RPC error response 00:10:03.343 response: 00:10:03.343 { 00:10:03.343 "code": -32602, 00:10:03.343 "message": "Invalid parameters" 00:10:03.343 } 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:03.343 Adding namespace failed - expected result. 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:03.343 test case2: host connect to nvmf target in multiple paths 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.343 [2024-11-26 19:11:26.313924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.343 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:04.715 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:05.647 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:05.647 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:05.647 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:05.647 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:05.647 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:07.564 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:07.564 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:07.564 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:07.564 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:07.564 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:07.564 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:07.564 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:07.564 [global] 00:10:07.564 thread=1 00:10:07.564 invalidate=1 00:10:07.564 rw=write 00:10:07.564 time_based=1 00:10:07.564 runtime=1 00:10:07.564 ioengine=libaio 00:10:07.564 direct=1 00:10:07.564 bs=4096 00:10:07.564 iodepth=1 00:10:07.564 norandommap=0 00:10:07.564 numjobs=1 00:10:07.564 00:10:07.564 verify_dump=1 00:10:07.564 verify_backlog=512 00:10:07.564 verify_state_save=0 00:10:07.564 do_verify=1 00:10:07.564 verify=crc32c-intel 00:10:07.564 [job0] 00:10:07.564 filename=/dev/nvme0n1 00:10:07.821 Could not set queue depth (nvme0n1) 00:10:08.078 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:08.078 fio-3.35 00:10:08.078 Starting 1 thread 00:10:09.012 00:10:09.012 job0: (groupid=0, jobs=1): err= 0: pid=3626658: Tue Nov 26 19:11:32 2024 00:10:09.012 read: IOPS=22, BW=90.6KiB/s (92.7kB/s)(92.0KiB/1016msec) 00:10:09.012 slat (nsec): min=9982, max=27103, avg=20015.57, stdev=5500.28 00:10:09.012 clat (usec): min=354, max=41939, avg=39221.64, stdev=8476.03 00:10:09.012 lat (usec): min=381, max=41962, avg=39241.65, stdev=8474.52 00:10:09.012 clat percentiles (usec): 00:10:09.012 | 1.00th=[ 355], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:10:09.012 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:09.012 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:09.012 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:09.012 | 99.99th=[41681] 00:10:09.012 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:10:09.012 slat (usec): min=10, max=25857, avg=62.52, stdev=1142.21 00:10:09.012 clat (usec): min=120, max=330, avg=154.86, stdev=18.65 00:10:09.012 lat (usec): min=131, max=26074, avg=217.38, stdev=1145.14 00:10:09.012 clat percentiles (usec): 00:10:09.012 | 1.00th=[ 123], 5.00th=[ 126], 10.00th=[ 127], 20.00th=[ 135], 00:10:09.012 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:10:09.012 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 176], 00:10:09.012 | 99.00th=[ 190], 99.50th=[ 219], 99.90th=[ 330], 99.95th=[ 330], 00:10:09.012 | 99.99th=[ 330] 00:10:09.012 bw ( KiB/s): min= 4087, max= 4087, per=100.00%, avg=4087.00, stdev= 0.00, samples=1 00:10:09.012 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:09.012 lat (usec) : 250=95.51%, 500=0.37% 00:10:09.012 lat (msec) : 50=4.11% 00:10:09.012 cpu : usr=0.69%, sys=0.69%, ctx=538, majf=0, minf=1 00:10:09.012 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.012 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.012 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.012 00:10:09.012 Run status group 0 (all jobs): 00:10:09.012 READ: bw=90.6KiB/s (92.7kB/s), 90.6KiB/s-90.6KiB/s (92.7kB/s-92.7kB/s), io=92.0KiB (94.2kB), run=1016-1016msec 00:10:09.012 WRITE: bw=2016KiB/s (2064kB/s), 2016KiB/s-2016KiB/s (2064kB/s-2064kB/s), io=2048KiB (2097kB), run=1016-1016msec 00:10:09.012 00:10:09.012 Disk stats (read/write): 00:10:09.012 nvme0n1: ios=46/512, merge=0/0, ticks=1771/71, in_queue=1842, util=98.60% 00:10:09.012 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:09.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.269 rmmod nvme_tcp 00:10:09.269 rmmod nvme_fabrics 00:10:09.269 rmmod nvme_keyring 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3625572 ']' 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3625572 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3625572 ']' 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3625572 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.269 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3625572 00:10:09.528 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.529 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.529 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3625572' 00:10:09.529 killing process with pid 3625572 00:10:09.529 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3625572 00:10:09.529 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 3625572 00:10:09.529 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:09.529 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:09.529 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:09.529 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:09.529 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:09.529 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:09.529 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:09.529 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.529 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:09.529 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.529 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.529 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:12.066 00:10:12.066 real 0m15.458s 00:10:12.066 user 0m35.805s 00:10:12.066 sys 0m5.187s 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:12.066 ************************************ 00:10:12.066 END TEST nvmf_nmic 00:10:12.066 ************************************ 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:12.066 ************************************ 00:10:12.066 START TEST nvmf_fio_target 00:10:12.066 ************************************ 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:12.066 * Looking for test storage... 
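Both nmic.sh above and fio.sh below drive the target through the same JSON-RPC provisioning sequence before attaching from the host: create the TCP transport, back a subsystem with a malloc bdev, expose it as a namespace, add listeners, then connect with nvme-cli over each listener (the deliberate failure in the middle of the nmic trace, adding Malloc0 to a second subsystem, is the negative case and is left out here). The sketch below reproduces that sequence manually, assuming a target already running and reachable over the default /var/tmp/spdk.sock RPC socket; SPDK_ROOT is a placeholder for the checkout path and --hostid is dropped for brevity.

#!/usr/bin/env bash
# Sketch of the rpc.py provisioning and host attach steps that nmic.sh (above)
# and fio.sh (below) drive through rpc_cmd. Flags and NQNs follow the trace.
set -euo pipefail

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # placeholder checkout path
RPC="$SPDK_ROOT/scripts/rpc.py"                                # uses /var/tmp/spdk.sock by default

# Target side: transport, backing bdev, subsystem, namespace, listeners.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# Host side: attach over both listeners so the namespace shows up via two paths.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
nvme connect --hostnqn="$HOSTNQN" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn="$HOSTNQN" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

# Tear down when done.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

Once the /dev/nvme0n* block devices appear, the tests point fio at them through scripts/fio-wrapper, which is what produces the job files and result summaries dumped in the trace.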
00:10:12.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:12.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.066 --rc genhtml_branch_coverage=1 00:10:12.066 --rc genhtml_function_coverage=1 00:10:12.066 --rc genhtml_legend=1 00:10:12.066 --rc geninfo_all_blocks=1 00:10:12.066 --rc geninfo_unexecuted_blocks=1 00:10:12.066 00:10:12.066 ' 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:12.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.066 --rc genhtml_branch_coverage=1 00:10:12.066 --rc genhtml_function_coverage=1 00:10:12.066 --rc genhtml_legend=1 00:10:12.066 --rc geninfo_all_blocks=1 00:10:12.066 --rc geninfo_unexecuted_blocks=1 00:10:12.066 00:10:12.066 ' 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:12.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.066 --rc genhtml_branch_coverage=1 00:10:12.066 --rc genhtml_function_coverage=1 00:10:12.066 --rc genhtml_legend=1 00:10:12.066 --rc geninfo_all_blocks=1 00:10:12.066 --rc geninfo_unexecuted_blocks=1 00:10:12.066 00:10:12.066 ' 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:12.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.066 --rc genhtml_branch_coverage=1 00:10:12.066 --rc genhtml_function_coverage=1 00:10:12.066 --rc genhtml_legend=1 00:10:12.066 --rc geninfo_all_blocks=1 00:10:12.066 --rc geninfo_unexecuted_blocks=1 00:10:12.066 00:10:12.066 ' 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.066 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:12.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:12.067 19:11:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:12.067 19:11:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.656 19:11:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:18.656 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:18.656 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.656 19:11:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:18.656 Found net devices under 0000:86:00.0: cvl_0_0 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:18.656 Found net devices under 0000:86:00.1: cvl_0_1 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:18.656 19:11:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:18.656 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:18.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:10:18.657 00:10:18.657 --- 10.0.0.2 ping statistics --- 00:10:18.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.657 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:18.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:18.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:10:18.657 00:10:18.657 --- 10.0.0.1 ping statistics --- 00:10:18.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.657 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3630435 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3630435 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3630435 ']' 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.657 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.657 [2024-11-26 19:11:41.014484] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:10:18.657 [2024-11-26 19:11:41.014533] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.657 [2024-11-26 19:11:41.094557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.657 [2024-11-26 19:11:41.137096] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.657 [2024-11-26 19:11:41.137132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.657 [2024-11-26 19:11:41.137140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.657 [2024-11-26 19:11:41.137146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.657 [2024-11-26 19:11:41.137151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.657 [2024-11-26 19:11:41.138618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.657 [2024-11-26 19:11:41.138724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.657 [2024-11-26 19:11:41.138759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.657 [2024-11-26 19:11:41.138760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.657 19:11:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.657 19:11:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:18.657 19:11:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:18.657 19:11:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.657 19:11:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.657 19:11:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.657 19:11:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:18.657 [2024-11-26 19:11:41.449484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.657 19:11:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.657 19:11:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:18.657 19:11:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.915 19:11:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:18.915 19:11:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.173 19:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:19.173 19:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.431 19:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:19.431 19:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:19.431 19:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.690 19:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:19.690 19:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.948 19:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:19.948 19:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.205 19:11:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:20.205 19:11:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:20.463 19:11:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:20.721 19:11:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:20.721 19:11:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.721 19:11:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:20.721 19:11:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:20.980 19:11:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.239 [2024-11-26 19:11:44.145141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.239 19:11:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:21.497 19:11:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:21.497 19:11:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:22.874 19:11:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:22.874 19:11:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:22.874 19:11:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:22.874 19:11:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:22.874 19:11:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:22.874 19:11:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:24.768 19:11:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:24.768 19:11:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:24.768 19:11:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.768 19:11:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:24.768 19:11:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.768 19:11:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:24.768 19:11:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:24.768 [global] 00:10:24.768 thread=1 00:10:24.768 invalidate=1 00:10:24.768 rw=write 00:10:24.768 time_based=1 00:10:24.768 runtime=1 00:10:24.768 ioengine=libaio 00:10:24.768 direct=1 00:10:24.768 bs=4096 00:10:24.768 iodepth=1 00:10:24.768 norandommap=0 00:10:24.768 numjobs=1 00:10:24.768 00:10:24.768 verify_dump=1 00:10:24.768 verify_backlog=512 00:10:24.768 verify_state_save=0 00:10:24.768 do_verify=1 00:10:24.768 verify=crc32c-intel 00:10:24.768 [job0] 00:10:24.768 filename=/dev/nvme0n1 00:10:24.768 [job1] 00:10:24.768 filename=/dev/nvme0n2 00:10:24.768 [job2] 00:10:24.768 filename=/dev/nvme0n3 00:10:24.768 [job3] 00:10:24.768 filename=/dev/nvme0n4 00:10:24.768 Could not set queue depth (nvme0n1) 00:10:24.768 Could not set queue depth (nvme0n2) 00:10:24.768 Could not set queue depth (nvme0n3) 00:10:24.768 Could not set queue depth (nvme0n4) 00:10:25.025 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.025 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.025 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.025 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.025 fio-3.35 00:10:25.025 Starting 4 threads 00:10:26.391 00:10:26.391 job0: (groupid=0, jobs=1): err= 0: pid=3631791: Tue Nov 26 19:11:49 2024 00:10:26.391 read: IOPS=36, BW=146KiB/s (150kB/s)(148KiB/1013msec) 00:10:26.391 slat (nsec): min=8659, max=28540, avg=16726.19, stdev=6611.06 00:10:26.391 clat (usec): min=186, max=43895, avg=24429.40, stdev=20274.20 00:10:26.391 lat (usec): min=196, max=43924, avg=24446.12, stdev=20273.68 00:10:26.391 clat percentiles (usec): 00:10:26.391 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 206], 
00:10:26.391 | 30.00th=[ 215], 40.00th=[ 400], 50.00th=[40633], 60.00th=[40633], 00:10:26.391 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:26.391 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:10:26.391 | 99.99th=[43779] 00:10:26.391 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:10:26.391 slat (nsec): min=9858, max=47257, avg=11429.12, stdev=2654.56 00:10:26.391 clat (usec): min=134, max=382, avg=197.25, stdev=34.93 00:10:26.391 lat (usec): min=145, max=430, avg=208.67, stdev=35.42 00:10:26.391 clat percentiles (usec): 00:10:26.391 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 167], 00:10:26.391 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 200], 00:10:26.391 | 70.00th=[ 219], 80.00th=[ 239], 90.00th=[ 243], 95.00th=[ 245], 00:10:26.391 | 99.00th=[ 260], 99.50th=[ 322], 99.90th=[ 383], 99.95th=[ 383], 00:10:26.391 | 99.99th=[ 383] 00:10:26.391 bw ( KiB/s): min= 4096, max= 4096, per=33.93%, avg=4096.00, stdev= 0.00, samples=1 00:10:26.391 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:26.391 lat (usec) : 250=94.35%, 500=1.64% 00:10:26.391 lat (msec) : 50=4.01% 00:10:26.391 cpu : usr=0.40%, sys=0.89%, ctx=549, majf=0, minf=1 00:10:26.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.392 issued rwts: total=37,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.392 job1: (groupid=0, jobs=1): err= 0: pid=3631792: Tue Nov 26 19:11:49 2024 00:10:26.392 read: IOPS=518, BW=2075KiB/s (2124kB/s)(2112KiB/1018msec) 00:10:26.392 slat (nsec): min=6088, max=23764, avg=8373.70, stdev=2483.11 00:10:26.392 clat (usec): min=162, max=41138, avg=1600.38, stdev=7401.14 00:10:26.392 lat (usec): min=169, max=41148, avg=1608.76, stdev=7402.40 00:10:26.392 clat percentiles (usec): 00:10:26.392 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:10:26.392 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:10:26.392 | 70.00th=[ 217], 80.00th=[ 247], 90.00th=[ 262], 95.00th=[ 289], 00:10:26.392 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:26.392 | 99.99th=[41157] 00:10:26.392 write: IOPS=1005, BW=4024KiB/s (4120kB/s)(4096KiB/1018msec); 0 zone resets 00:10:26.392 slat (nsec): min=5444, max=41635, avg=10385.89, stdev=2480.13 00:10:26.392 clat (usec): min=112, max=313, avg=150.23, stdev=20.34 00:10:26.392 lat (usec): min=118, max=353, avg=160.62, stdev=21.05 00:10:26.392 clat percentiles (usec): 00:10:26.392 | 1.00th=[ 119], 5.00th=[ 124], 10.00th=[ 128], 20.00th=[ 135], 00:10:26.392 | 30.00th=[ 139], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 153], 00:10:26.392 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 182], 00:10:26.392 | 99.00th=[ 210], 99.50th=[ 223], 99.90th=[ 314], 99.95th=[ 314], 00:10:26.392 | 99.99th=[ 314] 00:10:26.392 bw ( KiB/s): min= 8192, max= 8192, per=67.87%, avg=8192.00, stdev= 0.00, samples=1 00:10:26.392 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:26.392 lat (usec) : 250=93.62%, 500=5.22% 00:10:26.392 lat (msec) : 50=1.16% 00:10:26.392 cpu : usr=1.47%, sys=1.47%, ctx=1552, majf=0, minf=1 00:10:26.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.392 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.392 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.392 job2: (groupid=0, jobs=1): err= 0: pid=3631793: Tue Nov 26 19:11:49 2024 00:10:26.392 read: IOPS=610, BW=2442KiB/s (2500kB/s)(2444KiB/1001msec) 00:10:26.392 slat (nsec): min=6724, max=35086, avg=8147.66, stdev=3032.03 00:10:26.392 clat (usec): min=173, max=41560, avg=1343.48, stdev=6731.40 00:10:26.392 lat (usec): min=180, max=41568, avg=1351.63, stdev=6732.17 00:10:26.392 clat percentiles (usec): 00:10:26.392 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:10:26.392 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 204], 00:10:26.392 | 70.00th=[ 208], 80.00th=[ 219], 90.00th=[ 249], 95.00th=[ 260], 00:10:26.392 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:10:26.392 | 99.99th=[41681] 00:10:26.392 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:26.392 slat (nsec): min=10351, max=44526, avg=12431.31, stdev=1980.23 00:10:26.392 clat (usec): min=118, max=292, avg=153.85, stdev=23.56 00:10:26.392 lat (usec): min=129, max=337, avg=166.29, stdev=24.75 00:10:26.392 clat percentiles (usec): 00:10:26.392 | 1.00th=[ 123], 5.00th=[ 126], 10.00th=[ 127], 20.00th=[ 130], 00:10:26.392 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 155], 60.00th=[ 163], 00:10:26.392 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 192], 00:10:26.392 | 99.00th=[ 204], 99.50th=[ 215], 99.90th=[ 265], 99.95th=[ 293], 00:10:26.392 | 99.99th=[ 293] 00:10:26.392 bw ( KiB/s): min= 4096, max= 4096, per=33.93%, avg=4096.00, stdev= 0.00, samples=1 00:10:26.392 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:26.392 lat (usec) : 250=96.51%, 500=2.45% 00:10:26.392 lat (msec) : 50=1.04% 00:10:26.392 cpu : usr=0.80%, sys=1.90%, ctx=1636, majf=0, minf=1 00:10:26.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.392 issued rwts: total=611,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.392 job3: (groupid=0, jobs=1): err= 0: pid=3631794: Tue Nov 26 19:11:49 2024 00:10:26.392 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:10:26.392 slat (nsec): min=9307, max=24042, avg=22139.50, stdev=2895.96 00:10:26.392 clat (usec): min=40869, max=41978, avg=41203.79, stdev=412.92 00:10:26.392 lat (usec): min=40892, max=42000, avg=41225.93, stdev=413.03 00:10:26.392 clat percentiles (usec): 00:10:26.392 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:26.392 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:26.392 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:26.392 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:26.392 | 99.99th=[42206] 00:10:26.392 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:10:26.392 slat (nsec): min=6791, max=33225, avg=11419.34, stdev=3145.23 00:10:26.392 clat (usec): min=138, max=387, avg=183.68, stdev=23.37 00:10:26.392 lat (usec): min=147, max=420, avg=195.09, stdev=23.71 
00:10:26.392 clat percentiles (usec): 00:10:26.392 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:10:26.392 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 188], 00:10:26.392 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 208], 95.00th=[ 219], 00:10:26.392 | 99.00th=[ 243], 99.50th=[ 289], 99.90th=[ 388], 99.95th=[ 388], 00:10:26.392 | 99.99th=[ 388] 00:10:26.392 bw ( KiB/s): min= 4096, max= 4096, per=33.93%, avg=4096.00, stdev= 0.00, samples=1 00:10:26.392 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:26.392 lat (usec) : 250=95.13%, 500=0.75% 00:10:26.392 lat (msec) : 50=4.12% 00:10:26.392 cpu : usr=0.30%, sys=0.50%, ctx=534, majf=0, minf=1 00:10:26.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.392 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.392 00:10:26.392 Run status group 0 (all jobs): 00:10:26.392 READ: bw=4707KiB/s (4820kB/s), 87.3KiB/s-2442KiB/s (89.4kB/s-2500kB/s), io=4792KiB (4907kB), run=1001-1018msec 00:10:26.392 WRITE: bw=11.8MiB/s (12.4MB/s), 2022KiB/s-4092KiB/s (2070kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1018msec 00:10:26.392 00:10:26.392 Disk stats (read/write): 00:10:26.392 nvme0n1: ios=82/512, merge=0/0, ticks=757/99, in_queue=856, util=86.17% 00:10:26.392 nvme0n2: ios=573/1024, merge=0/0, ticks=704/138, in_queue=842, util=90.61% 00:10:26.392 nvme0n3: ios=443/512, merge=0/0, ticks=1641/90, in_queue=1731, util=93.51% 00:10:26.392 nvme0n4: ios=75/512, merge=0/0, ticks=816/87, in_queue=903, util=95.25% 00:10:26.392 19:11:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:26.392 [global] 00:10:26.392 thread=1 00:10:26.392 invalidate=1 00:10:26.392 rw=randwrite 00:10:26.392 time_based=1 00:10:26.392 runtime=1 00:10:26.392 ioengine=libaio 00:10:26.392 direct=1 00:10:26.392 bs=4096 00:10:26.392 iodepth=1 00:10:26.392 norandommap=0 00:10:26.392 numjobs=1 00:10:26.392 00:10:26.392 verify_dump=1 00:10:26.392 verify_backlog=512 00:10:26.392 verify_state_save=0 00:10:26.392 do_verify=1 00:10:26.392 verify=crc32c-intel 00:10:26.392 [job0] 00:10:26.392 filename=/dev/nvme0n1 00:10:26.392 [job1] 00:10:26.392 filename=/dev/nvme0n2 00:10:26.392 [job2] 00:10:26.392 filename=/dev/nvme0n3 00:10:26.392 [job3] 00:10:26.392 filename=/dev/nvme0n4 00:10:26.392 Could not set queue depth (nvme0n1) 00:10:26.392 Could not set queue depth (nvme0n2) 00:10:26.392 Could not set queue depth (nvme0n3) 00:10:26.392 Could not set queue depth (nvme0n4) 00:10:26.650 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.650 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.650 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.650 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.650 fio-3.35 00:10:26.650 Starting 4 threads 00:10:28.024 00:10:28.024 job0: (groupid=0, jobs=1): err= 0: pid=3632167: Tue Nov 26 19:11:50 2024 00:10:28.024 read: IOPS=1508, 
BW=6036KiB/s (6181kB/s)(6084KiB/1008msec) 00:10:28.024 slat (nsec): min=6298, max=28033, avg=7271.19, stdev=1696.35 00:10:28.024 clat (usec): min=172, max=41031, avg=481.85, stdev=3142.85 00:10:28.024 lat (usec): min=180, max=41055, avg=489.12, stdev=3144.05 00:10:28.024 clat percentiles (usec): 00:10:28.024 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 202], 00:10:28.024 | 30.00th=[ 208], 40.00th=[ 227], 50.00th=[ 237], 60.00th=[ 243], 00:10:28.024 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:10:28.024 | 99.00th=[ 318], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:28.024 | 99.99th=[41157] 00:10:28.024 write: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec); 0 zone resets 00:10:28.024 slat (nsec): min=9276, max=36117, avg=10776.78, stdev=1431.99 00:10:28.024 clat (usec): min=104, max=415, avg=156.18, stdev=32.95 00:10:28.024 lat (usec): min=114, max=426, avg=166.96, stdev=33.11 00:10:28.024 clat percentiles (usec): 00:10:28.024 | 1.00th=[ 113], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 125], 00:10:28.024 | 30.00th=[ 129], 40.00th=[ 137], 50.00th=[ 159], 60.00th=[ 169], 00:10:28.024 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 194], 95.00th=[ 204], 00:10:28.024 | 99.00th=[ 247], 99.50th=[ 269], 99.90th=[ 351], 99.95th=[ 416], 00:10:28.024 | 99.99th=[ 416] 00:10:28.024 bw ( KiB/s): min= 1176, max=11112, per=51.75%, avg=6144.00, stdev=7025.81, samples=2 00:10:28.024 iops : min= 294, max= 2778, avg=1536.00, stdev=1756.45, samples=2 00:10:28.024 lat (usec) : 250=86.52%, 500=13.08% 00:10:28.024 lat (msec) : 2=0.03%, 4=0.03%, 20=0.03%, 50=0.29% 00:10:28.024 cpu : usr=1.79%, sys=2.48%, ctx=3060, majf=0, minf=1 00:10:28.024 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.024 issued rwts: total=1521,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.024 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.024 job1: (groupid=0, jobs=1): err= 0: pid=3632176: Tue Nov 26 19:11:50 2024 00:10:28.024 read: IOPS=23, BW=92.8KiB/s (95.0kB/s)(96.0KiB/1035msec) 00:10:28.024 slat (nsec): min=9984, max=25121, avg=21973.50, stdev=3779.12 00:10:28.024 clat (usec): min=331, max=41047, avg=39254.21, stdev=8290.77 00:10:28.024 lat (usec): min=356, max=41069, avg=39276.18, stdev=8290.21 00:10:28.024 clat percentiles (usec): 00:10:28.024 | 1.00th=[ 330], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:28.024 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:28.024 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:28.024 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:28.024 | 99.99th=[41157] 00:10:28.024 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:10:28.024 slat (nsec): min=9844, max=40543, avg=11207.59, stdev=2210.56 00:10:28.024 clat (usec): min=135, max=491, avg=164.84, stdev=27.99 00:10:28.024 lat (usec): min=145, max=502, avg=176.05, stdev=28.76 00:10:28.024 clat percentiles (usec): 00:10:28.024 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 151], 00:10:28.024 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:10:28.024 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 180], 95.00th=[ 229], 00:10:28.024 | 99.00th=[ 273], 99.50th=[ 297], 99.90th=[ 494], 99.95th=[ 494], 00:10:28.024 | 99.99th=[ 494] 00:10:28.024 
bw ( KiB/s): min= 4096, max= 4096, per=34.50%, avg=4096.00, stdev= 0.00, samples=1 00:10:28.024 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:28.024 lat (usec) : 250=93.28%, 500=2.43% 00:10:28.024 lat (msec) : 50=4.29% 00:10:28.024 cpu : usr=0.77%, sys=0.58%, ctx=536, majf=0, minf=2 00:10:28.024 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.024 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.024 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.024 job2: (groupid=0, jobs=1): err= 0: pid=3632201: Tue Nov 26 19:11:50 2024 00:10:28.024 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:10:28.024 slat (nsec): min=9829, max=25674, avg=22512.36, stdev=3341.11 00:10:28.024 clat (usec): min=40845, max=45137, avg=41218.27, stdev=899.76 00:10:28.024 lat (usec): min=40869, max=45153, avg=41240.78, stdev=898.09 00:10:28.024 clat percentiles (usec): 00:10:28.025 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:28.025 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:28.025 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:28.025 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:10:28.025 | 99.99th=[45351] 00:10:28.025 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:10:28.025 slat (nsec): min=10781, max=36626, avg=12018.51, stdev=2082.33 00:10:28.025 clat (usec): min=147, max=311, avg=173.21, stdev=14.96 00:10:28.025 lat (usec): min=159, max=347, avg=185.23, stdev=15.82 00:10:28.025 clat percentiles (usec): 00:10:28.025 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:10:28.025 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:10:28.025 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 198], 00:10:28.025 | 99.00th=[ 215], 99.50th=[ 221], 99.90th=[ 310], 99.95th=[ 310], 00:10:28.025 | 99.99th=[ 310] 00:10:28.025 bw ( KiB/s): min= 4096, max= 4096, per=34.50%, avg=4096.00, stdev= 0.00, samples=1 00:10:28.025 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:28.025 lat (usec) : 250=95.69%, 500=0.19% 00:10:28.025 lat (msec) : 50=4.12% 00:10:28.025 cpu : usr=0.50%, sys=0.80%, ctx=535, majf=0, minf=1 00:10:28.025 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.025 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.025 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.025 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.025 job3: (groupid=0, jobs=1): err= 0: pid=3632213: Tue Nov 26 19:11:50 2024 00:10:28.025 read: IOPS=53, BW=214KiB/s (219kB/s)(216KiB/1011msec) 00:10:28.025 slat (nsec): min=8374, max=42007, avg=16007.83, stdev=7593.73 00:10:28.025 clat (usec): min=204, max=41442, avg=16817.39, stdev=20145.09 00:10:28.025 lat (usec): min=215, max=41452, avg=16833.40, stdev=20144.93 00:10:28.025 clat percentiles (usec): 00:10:28.025 | 1.00th=[ 204], 5.00th=[ 217], 10.00th=[ 229], 20.00th=[ 237], 00:10:28.025 | 30.00th=[ 251], 40.00th=[ 277], 50.00th=[ 302], 60.00th=[40633], 00:10:28.025 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 
00:10:28.025 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:28.025 | 99.99th=[41681] 00:10:28.025 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:10:28.025 slat (nsec): min=10611, max=35863, avg=11928.31, stdev=2130.20 00:10:28.025 clat (usec): min=133, max=340, avg=182.66, stdev=22.84 00:10:28.025 lat (usec): min=144, max=351, avg=194.59, stdev=23.15 00:10:28.025 clat percentiles (usec): 00:10:28.025 | 1.00th=[ 139], 5.00th=[ 151], 10.00th=[ 161], 20.00th=[ 167], 00:10:28.025 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:10:28.025 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 229], 00:10:28.025 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 343], 99.95th=[ 343], 00:10:28.025 | 99.99th=[ 343] 00:10:28.025 bw ( KiB/s): min= 4096, max= 4096, per=34.50%, avg=4096.00, stdev= 0.00, samples=1 00:10:28.025 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:28.025 lat (usec) : 250=91.52%, 500=4.59% 00:10:28.025 lat (msec) : 50=3.89% 00:10:28.025 cpu : usr=0.20%, sys=1.29%, ctx=570, majf=0, minf=1 00:10:28.025 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.025 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.025 issued rwts: total=54,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.025 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.025 00:10:28.025 Run status group 0 (all jobs): 00:10:28.025 READ: bw=6265KiB/s (6415kB/s), 87.6KiB/s-6036KiB/s (89.7kB/s-6181kB/s), io=6484KiB (6640kB), run=1005-1035msec 00:10:28.025 WRITE: bw=11.6MiB/s (12.2MB/s), 1979KiB/s-6095KiB/s (2026kB/s-6242kB/s), io=12.0MiB (12.6MB), run=1005-1035msec 00:10:28.025 00:10:28.025 Disk stats (read/write): 00:10:28.025 nvme0n1: ios=1542/1536, merge=0/0, ticks=1492/227, in_queue=1719, util=97.70% 00:10:28.025 nvme0n2: ios=18/512, merge=0/0, ticks=697/83, in_queue=780, util=83.04% 00:10:28.025 nvme0n3: ios=40/512, merge=0/0, ticks=1645/85, in_queue=1730, util=97.94% 00:10:28.025 nvme0n4: ios=85/512, merge=0/0, ticks=835/85, in_queue=920, util=97.24% 00:10:28.025 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:28.025 [global] 00:10:28.025 thread=1 00:10:28.025 invalidate=1 00:10:28.025 rw=write 00:10:28.025 time_based=1 00:10:28.025 runtime=1 00:10:28.025 ioengine=libaio 00:10:28.025 direct=1 00:10:28.025 bs=4096 00:10:28.025 iodepth=128 00:10:28.025 norandommap=0 00:10:28.025 numjobs=1 00:10:28.025 00:10:28.025 verify_dump=1 00:10:28.025 verify_backlog=512 00:10:28.025 verify_state_save=0 00:10:28.025 do_verify=1 00:10:28.025 verify=crc32c-intel 00:10:28.025 [job0] 00:10:28.025 filename=/dev/nvme0n1 00:10:28.025 [job1] 00:10:28.025 filename=/dev/nvme0n2 00:10:28.025 [job2] 00:10:28.025 filename=/dev/nvme0n3 00:10:28.025 [job3] 00:10:28.025 filename=/dev/nvme0n4 00:10:28.025 Could not set queue depth (nvme0n1) 00:10:28.025 Could not set queue depth (nvme0n2) 00:10:28.025 Could not set queue depth (nvme0n3) 00:10:28.025 Could not set queue depth (nvme0n4) 00:10:28.283 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.283 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.283 job2: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.283 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.283 fio-3.35 00:10:28.283 Starting 4 threads 00:10:29.675 00:10:29.675 job0: (groupid=0, jobs=1): err= 0: pid=3632668: Tue Nov 26 19:11:52 2024 00:10:29.675 read: IOPS=4701, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1007msec) 00:10:29.675 slat (nsec): min=1090, max=15763k, avg=99660.87, stdev=762163.39 00:10:29.675 clat (usec): min=754, max=61019, avg=12581.69, stdev=8770.99 00:10:29.675 lat (usec): min=763, max=61023, avg=12681.35, stdev=8854.51 00:10:29.675 clat percentiles (usec): 00:10:29.675 | 1.00th=[ 3294], 5.00th=[ 4080], 10.00th=[ 5932], 20.00th=[ 7898], 00:10:29.675 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10552], 60.00th=[11076], 00:10:29.675 | 70.00th=[11600], 80.00th=[14222], 90.00th=[20841], 95.00th=[28705], 00:10:29.675 | 99.00th=[57410], 99.50th=[58983], 99.90th=[61080], 99.95th=[61080], 00:10:29.675 | 99.99th=[61080] 00:10:29.675 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:10:29.675 slat (usec): min=2, max=13838, avg=86.96, stdev=647.38 00:10:29.675 clat (usec): min=497, max=61024, avg=13221.82, stdev=8133.67 00:10:29.675 lat (usec): min=508, max=61027, avg=13308.79, stdev=8180.01 00:10:29.675 clat percentiles (usec): 00:10:29.675 | 1.00th=[ 2376], 5.00th=[ 4686], 10.00th=[ 6587], 20.00th=[ 7767], 00:10:29.675 | 30.00th=[ 8979], 40.00th=[10159], 50.00th=[11338], 60.00th=[13042], 00:10:29.675 | 70.00th=[14615], 80.00th=[17957], 90.00th=[19530], 95.00th=[27657], 00:10:29.675 | 99.00th=[51119], 99.50th=[52167], 99.90th=[53216], 99.95th=[59507], 00:10:29.675 | 99.99th=[61080] 00:10:29.675 bw ( KiB/s): min=15624, max=25320, per=27.60%, avg=20472.00, stdev=6856.11, samples=2 00:10:29.675 iops : min= 3906, max= 6330, avg=5118.00, stdev=1714.03, samples=2 00:10:29.675 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.40% 00:10:29.675 lat (msec) : 2=0.03%, 4=4.11%, 10=34.43%, 20=51.54%, 50=8.00% 00:10:29.675 lat (msec) : 100=1.43% 00:10:29.675 cpu : usr=3.48%, sys=5.37%, ctx=364, majf=0, minf=1 00:10:29.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:29.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.675 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.675 job1: (groupid=0, jobs=1): err= 0: pid=3632679: Tue Nov 26 19:11:52 2024 00:10:29.675 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:10:29.675 slat (nsec): min=1108, max=25577k, avg=103064.93, stdev=877772.95 00:10:29.675 clat (usec): min=3346, max=42606, avg=14733.05, stdev=6646.70 00:10:29.675 lat (usec): min=3359, max=42628, avg=14836.11, stdev=6699.44 00:10:29.675 clat percentiles (usec): 00:10:29.675 | 1.00th=[ 4948], 5.00th=[ 7308], 10.00th=[ 8225], 20.00th=[10159], 00:10:29.675 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12256], 60.00th=[14222], 00:10:29.675 | 70.00th=[16319], 80.00th=[20317], 90.00th=[25297], 95.00th=[27657], 00:10:29.675 | 99.00th=[36439], 99.50th=[39060], 99.90th=[41157], 99.95th=[41157], 00:10:29.675 | 99.99th=[42730] 00:10:29.675 write: IOPS=4199, BW=16.4MiB/s (17.2MB/s)(16.6MiB/1011msec); 0 zone resets 00:10:29.675 slat (usec): min=2, max=14893, avg=104.45, stdev=654.34 00:10:29.675 clat (usec): 
min=193, max=52364, avg=16029.42, stdev=10618.65 00:10:29.675 lat (usec): min=331, max=52373, avg=16133.87, stdev=10677.61 00:10:29.675 clat percentiles (usec): 00:10:29.675 | 1.00th=[ 758], 5.00th=[ 3032], 10.00th=[ 3916], 20.00th=[ 7701], 00:10:29.675 | 30.00th=[ 9372], 40.00th=[11076], 50.00th=[15926], 60.00th=[18220], 00:10:29.675 | 70.00th=[19268], 80.00th=[21627], 90.00th=[28967], 95.00th=[39584], 00:10:29.675 | 99.00th=[50070], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:10:29.675 | 99.99th=[52167] 00:10:29.675 bw ( KiB/s): min=13768, max=19176, per=22.21%, avg=16472.00, stdev=3824.03, samples=2 00:10:29.675 iops : min= 3442, max= 4794, avg=4118.00, stdev=956.01, samples=2 00:10:29.675 lat (usec) : 250=0.01%, 500=0.10%, 750=0.35%, 1000=0.73% 00:10:29.675 lat (msec) : 2=0.44%, 4=3.50%, 10=21.83%, 20=51.17%, 50=21.21% 00:10:29.675 lat (msec) : 100=0.66% 00:10:29.675 cpu : usr=2.67%, sys=5.64%, ctx=414, majf=0, minf=2 00:10:29.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:29.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.675 issued rwts: total=4096,4246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.675 job2: (groupid=0, jobs=1): err= 0: pid=3632697: Tue Nov 26 19:11:52 2024 00:10:29.675 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:10:29.675 slat (nsec): min=1557, max=22568k, avg=105226.61, stdev=703774.94 00:10:29.675 clat (usec): min=8311, max=56752, avg=14017.44, stdev=7335.66 00:10:29.675 lat (usec): min=8318, max=56780, avg=14122.66, stdev=7383.47 00:10:29.675 clat percentiles (usec): 00:10:29.675 | 1.00th=[ 8848], 5.00th=[10552], 10.00th=[11076], 20.00th=[11207], 00:10:29.675 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11994], 00:10:29.675 | 70.00th=[12387], 80.00th=[13829], 90.00th=[17695], 95.00th=[31327], 00:10:29.675 | 99.00th=[52691], 99.50th=[52691], 99.90th=[52691], 99.95th=[55837], 00:10:29.675 | 99.99th=[56886] 00:10:29.675 write: IOPS=4162, BW=16.3MiB/s (17.1MB/s)(16.3MiB/1002msec); 0 zone resets 00:10:29.675 slat (usec): min=2, max=22455, avg=129.18, stdev=855.19 00:10:29.675 clat (usec): min=523, max=73000, avg=16076.91, stdev=9508.43 00:10:29.675 lat (usec): min=4002, max=73036, avg=16206.08, stdev=9597.58 00:10:29.675 clat percentiles (usec): 00:10:29.675 | 1.00th=[ 4359], 5.00th=[10945], 10.00th=[11207], 20.00th=[11469], 00:10:29.675 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[12780], 00:10:29.675 | 70.00th=[13173], 80.00th=[21103], 90.00th=[28443], 95.00th=[40109], 00:10:29.675 | 99.00th=[59507], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:10:29.675 | 99.99th=[72877] 00:10:29.675 bw ( KiB/s): min=12288, max=20896, per=22.37%, avg=16592.00, stdev=6086.78, samples=2 00:10:29.675 iops : min= 3072, max= 5226, avg=4149.00, stdev=1523.11, samples=2 00:10:29.675 lat (usec) : 750=0.01% 00:10:29.675 lat (msec) : 10=3.42%, 20=81.48%, 50=13.02%, 100=2.07% 00:10:29.675 cpu : usr=4.80%, sys=5.39%, ctx=359, majf=0, minf=1 00:10:29.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:29.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.675 issued rwts: total=4096,4171,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.676 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:10:29.676 job3: (groupid=0, jobs=1): err= 0: pid=3632703: Tue Nov 26 19:11:52 2024 00:10:29.676 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:10:29.676 slat (nsec): min=1381, max=10309k, avg=98166.91, stdev=564198.84 00:10:29.676 clat (usec): min=5858, max=36162, avg=12446.82, stdev=3549.07 00:10:29.676 lat (usec): min=5887, max=36205, avg=12544.99, stdev=3598.64 00:10:29.676 clat percentiles (usec): 00:10:29.676 | 1.00th=[ 7898], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[11076], 00:10:29.676 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:10:29.676 | 70.00th=[12125], 80.00th=[12649], 90.00th=[13566], 95.00th=[20055], 00:10:29.676 | 99.00th=[29230], 99.50th=[29230], 99.90th=[31851], 99.95th=[33817], 00:10:29.676 | 99.99th=[35914] 00:10:29.676 write: IOPS=5194, BW=20.3MiB/s (21.3MB/s)(20.4MiB/1003msec); 0 zone resets 00:10:29.676 slat (usec): min=2, max=8722, avg=89.22, stdev=460.56 00:10:29.676 clat (usec): min=2569, max=26249, avg=12165.16, stdev=1942.89 00:10:29.676 lat (usec): min=3223, max=26260, avg=12254.38, stdev=1986.82 00:10:29.676 clat percentiles (usec): 00:10:29.676 | 1.00th=[ 6915], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11207], 00:10:29.676 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11994], 60.00th=[12518], 00:10:29.676 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13698], 95.00th=[14091], 00:10:29.676 | 99.00th=[18482], 99.50th=[22152], 99.90th=[26346], 99.95th=[26346], 00:10:29.676 | 99.99th=[26346] 00:10:29.676 bw ( KiB/s): min=20480, max=20480, per=27.61%, avg=20480.00, stdev= 0.00, samples=2 00:10:29.676 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:29.676 lat (msec) : 4=0.13%, 10=7.00%, 20=90.06%, 50=2.82% 00:10:29.676 cpu : usr=4.19%, sys=6.69%, ctx=547, majf=0, minf=1 00:10:29.676 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:29.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.676 issued rwts: total=5120,5210,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.676 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.676 00:10:29.676 Run status group 0 (all jobs): 00:10:29.676 READ: bw=69.7MiB/s (73.1MB/s), 15.8MiB/s-19.9MiB/s (16.6MB/s-20.9MB/s), io=70.5MiB (73.9MB), run=1002-1011msec 00:10:29.676 WRITE: bw=72.4MiB/s (76.0MB/s), 16.3MiB/s-20.3MiB/s (17.1MB/s-21.3MB/s), io=73.2MiB (76.8MB), run=1002-1011msec 00:10:29.676 00:10:29.676 Disk stats (read/write): 00:10:29.676 nvme0n1: ios=4148/4608, merge=0/0, ticks=41574/53338, in_queue=94912, util=93.79% 00:10:29.676 nvme0n2: ios=3634/3615, merge=0/0, ticks=48817/52123, in_queue=100940, util=94.82% 00:10:29.676 nvme0n3: ios=3090/3577, merge=0/0, ticks=15739/19607, in_queue=35346, util=97.71% 00:10:29.676 nvme0n4: ios=4140/4597, merge=0/0, ticks=18857/20288, in_queue=39145, util=98.64% 00:10:29.676 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:29.676 [global] 00:10:29.676 thread=1 00:10:29.676 invalidate=1 00:10:29.676 rw=randwrite 00:10:29.676 time_based=1 00:10:29.676 runtime=1 00:10:29.676 ioengine=libaio 00:10:29.676 direct=1 00:10:29.676 bs=4096 00:10:29.676 iodepth=128 00:10:29.676 norandommap=0 00:10:29.676 numjobs=1 00:10:29.676 00:10:29.676 verify_dump=1 00:10:29.676 verify_backlog=512 
00:10:29.676 verify_state_save=0 00:10:29.676 do_verify=1 00:10:29.676 verify=crc32c-intel 00:10:29.676 [job0] 00:10:29.676 filename=/dev/nvme0n1 00:10:29.676 [job1] 00:10:29.676 filename=/dev/nvme0n2 00:10:29.676 [job2] 00:10:29.676 filename=/dev/nvme0n3 00:10:29.676 [job3] 00:10:29.676 filename=/dev/nvme0n4 00:10:29.676 Could not set queue depth (nvme0n1) 00:10:29.676 Could not set queue depth (nvme0n2) 00:10:29.676 Could not set queue depth (nvme0n3) 00:10:29.676 Could not set queue depth (nvme0n4) 00:10:29.935 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.935 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.935 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.935 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.935 fio-3.35 00:10:29.935 Starting 4 threads 00:10:31.302 00:10:31.302 job0: (groupid=0, jobs=1): err= 0: pid=3633179: Tue Nov 26 19:11:54 2024 00:10:31.302 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:10:31.302 slat (nsec): min=1084, max=12728k, avg=90891.82, stdev=661520.06 00:10:31.303 clat (msec): min=3, max=106, avg=13.11, stdev= 8.21 00:10:31.303 lat (msec): min=3, max=106, avg=13.20, stdev= 8.25 00:10:31.303 clat percentiles (msec): 00:10:31.303 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:10:31.303 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 14], 00:10:31.303 | 70.00th=[ 14], 80.00th=[ 16], 90.00th=[ 18], 95.00th=[ 24], 00:10:31.303 | 99.00th=[ 34], 99.50th=[ 104], 99.90th=[ 107], 99.95th=[ 107], 00:10:31.303 | 99.99th=[ 107] 00:10:31.303 write: IOPS=4191, BW=16.4MiB/s (17.2MB/s)(16.6MiB/1011msec); 0 zone resets 00:10:31.303 slat (nsec): min=1869, max=10052k, avg=113106.46, stdev=596077.76 00:10:31.303 clat (usec): min=2214, max=81250, avg=17603.88, stdev=12303.46 00:10:31.303 lat (usec): min=2254, max=81259, avg=17716.98, stdev=12386.73 00:10:31.303 clat percentiles (usec): 00:10:31.303 | 1.00th=[ 3326], 5.00th=[ 6063], 10.00th=[ 8356], 20.00th=[ 9241], 00:10:31.303 | 30.00th=[10159], 40.00th=[11863], 50.00th=[12649], 60.00th=[17171], 00:10:31.303 | 70.00th=[21365], 80.00th=[22938], 90.00th=[30278], 95.00th=[39584], 00:10:31.303 | 99.00th=[73925], 99.50th=[78119], 99.90th=[81265], 99.95th=[81265], 00:10:31.303 | 99.99th=[81265] 00:10:31.303 bw ( KiB/s): min=12592, max=20288, per=24.10%, avg=16440.00, stdev=5441.89, samples=2 00:10:31.303 iops : min= 3148, max= 5072, avg=4110.00, stdev=1360.47, samples=2 00:10:31.303 lat (msec) : 4=1.03%, 10=29.37%, 20=47.14%, 50=20.67%, 100=1.52% 00:10:31.303 lat (msec) : 250=0.25% 00:10:31.303 cpu : usr=2.97%, sys=4.55%, ctx=462, majf=0, minf=1 00:10:31.303 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:31.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.303 issued rwts: total=4096,4238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.303 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.303 job1: (groupid=0, jobs=1): err= 0: pid=3633195: Tue Nov 26 19:11:54 2024 00:10:31.303 read: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec) 00:10:31.303 slat (nsec): min=1030, max=24148k, avg=82896.94, stdev=717173.69 00:10:31.303 clat (usec): min=718, 
max=86918, avg=11601.72, stdev=5578.44 00:10:31.303 lat (usec): min=725, max=86927, avg=11684.62, stdev=5625.86 00:10:31.303 clat percentiles (usec): 00:10:31.303 | 1.00th=[ 1270], 5.00th=[ 5473], 10.00th=[ 7373], 20.00th=[ 9110], 00:10:31.303 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:10:31.303 | 70.00th=[12125], 80.00th=[13829], 90.00th=[17171], 95.00th=[21627], 00:10:31.303 | 99.00th=[35914], 99.50th=[35914], 99.90th=[84411], 99.95th=[84411], 00:10:31.303 | 99.99th=[86508] 00:10:31.303 write: IOPS=6139, BW=24.0MiB/s (25.1MB/s)(24.2MiB/1010msec); 0 zone resets 00:10:31.303 slat (nsec): min=1943, max=8880.4k, avg=59679.01, stdev=444640.66 00:10:31.303 clat (usec): min=301, max=85227, avg=10154.39, stdev=11427.15 00:10:31.303 lat (usec): min=309, max=85237, avg=10214.07, stdev=11471.10 00:10:31.303 clat percentiles (usec): 00:10:31.303 | 1.00th=[ 644], 5.00th=[ 988], 10.00th=[ 2180], 20.00th=[ 5932], 00:10:31.303 | 30.00th=[ 7767], 40.00th=[ 8291], 50.00th=[ 8979], 60.00th=[ 9503], 00:10:31.303 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[11469], 95.00th=[25297], 00:10:31.303 | 99.00th=[79168], 99.50th=[81265], 99.90th=[85459], 99.95th=[85459], 00:10:31.303 | 99.99th=[85459] 00:10:31.303 bw ( KiB/s): min=24576, max=24576, per=36.03%, avg=24576.00, stdev= 0.00, samples=2 00:10:31.303 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:10:31.303 lat (usec) : 500=0.23%, 750=2.15%, 1000=0.45% 00:10:31.303 lat (msec) : 2=3.05%, 4=4.12%, 10=51.90%, 20=32.07%, 50=4.84% 00:10:31.303 lat (msec) : 100=1.18% 00:10:31.303 cpu : usr=5.15%, sys=5.15%, ctx=547, majf=0, minf=2 00:10:31.303 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:31.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.303 issued rwts: total=5632,6201,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.303 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.303 job2: (groupid=0, jobs=1): err= 0: pid=3633198: Tue Nov 26 19:11:54 2024 00:10:31.303 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:10:31.303 slat (nsec): min=1483, max=35533k, avg=149965.42, stdev=1236223.31 00:10:31.303 clat (usec): min=6850, max=85938, avg=18128.50, stdev=13896.81 00:10:31.303 lat (usec): min=6856, max=85963, avg=18278.47, stdev=14051.65 00:10:31.303 clat percentiles (usec): 00:10:31.303 | 1.00th=[ 7635], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[10814], 00:10:31.303 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:10:31.303 | 70.00th=[13304], 80.00th=[32375], 90.00th=[39060], 95.00th=[53216], 00:10:31.303 | 99.00th=[57934], 99.50th=[67634], 99.90th=[70779], 99.95th=[70779], 00:10:31.303 | 99.99th=[85459] 00:10:31.303 write: IOPS=3787, BW=14.8MiB/s (15.5MB/s)(14.9MiB/1007msec); 0 zone resets 00:10:31.303 slat (usec): min=2, max=11093, avg=115.20, stdev=626.34 00:10:31.303 clat (usec): min=5113, max=81418, avg=16371.49, stdev=13744.47 00:10:31.303 lat (usec): min=6339, max=81428, avg=16486.69, stdev=13818.12 00:10:31.303 clat percentiles (usec): 00:10:31.303 | 1.00th=[ 7373], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[10814], 00:10:31.303 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:10:31.303 | 70.00th=[11863], 80.00th=[15139], 90.00th=[28443], 95.00th=[54264], 00:10:31.303 | 99.00th=[78119], 99.50th=[79168], 99.90th=[81265], 99.95th=[81265], 00:10:31.303 | 99.99th=[81265] 00:10:31.303 bw ( 
KiB/s): min=13104, max=16384, per=21.62%, avg=14744.00, stdev=2319.31, samples=2 00:10:31.303 iops : min= 3276, max= 4096, avg=3686.00, stdev=579.83, samples=2 00:10:31.303 lat (msec) : 10=10.81%, 20=69.74%, 50=12.72%, 100=6.73% 00:10:31.303 cpu : usr=3.38%, sys=4.57%, ctx=391, majf=0, minf=1 00:10:31.303 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:31.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.303 issued rwts: total=3584,3814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.303 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.303 job3: (groupid=0, jobs=1): err= 0: pid=3633199: Tue Nov 26 19:11:54 2024 00:10:31.303 read: IOPS=3177, BW=12.4MiB/s (13.0MB/s)(13.0MiB/1046msec) 00:10:31.303 slat (nsec): min=1116, max=22216k, avg=127272.21, stdev=1018207.22 00:10:31.303 clat (usec): min=4588, max=60973, avg=17637.52, stdev=11302.61 00:10:31.303 lat (usec): min=4592, max=73720, avg=17764.80, stdev=11384.50 00:10:31.303 clat percentiles (usec): 00:10:31.303 | 1.00th=[ 5407], 5.00th=[ 8356], 10.00th=[ 9241], 20.00th=[ 9896], 00:10:31.303 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12649], 60.00th=[14877], 00:10:31.303 | 70.00th=[16909], 80.00th=[25035], 90.00th=[32375], 95.00th=[43779], 00:10:31.303 | 99.00th=[60556], 99.50th=[60556], 99.90th=[61080], 99.95th=[61080], 00:10:31.303 | 99.99th=[61080] 00:10:31.303 write: IOPS=3426, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1046msec); 0 zone resets 00:10:31.303 slat (nsec): min=1832, max=22170k, avg=144784.45, stdev=947524.56 00:10:31.303 clat (usec): min=715, max=101731, avg=20298.80, stdev=19027.31 00:10:31.303 lat (usec): min=723, max=101739, avg=20443.59, stdev=19150.07 00:10:31.303 clat percentiles (msec): 00:10:31.303 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 7], 20.00th=[ 9], 00:10:31.303 | 30.00th=[ 10], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 18], 00:10:31.303 | 70.00th=[ 22], 80.00th=[ 25], 90.00th=[ 42], 95.00th=[ 60], 00:10:31.303 | 99.00th=[ 101], 99.50th=[ 102], 99.90th=[ 103], 99.95th=[ 103], 00:10:31.303 | 99.99th=[ 103] 00:10:31.303 bw ( KiB/s): min=11280, max=17392, per=21.02%, avg=14336.00, stdev=4321.84, samples=2 00:10:31.303 iops : min= 2820, max= 4348, avg=3584.00, stdev=1080.46, samples=2 00:10:31.303 lat (usec) : 750=0.09% 00:10:31.303 lat (msec) : 2=0.29%, 4=2.07%, 10=25.25%, 20=43.86%, 50=23.51% 00:10:31.303 lat (msec) : 100=4.43%, 250=0.51% 00:10:31.303 cpu : usr=2.30%, sys=3.54%, ctx=330, majf=0, minf=1 00:10:31.303 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:31.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.303 issued rwts: total=3324,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.303 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.303 00:10:31.303 Run status group 0 (all jobs): 00:10:31.303 READ: bw=62.1MiB/s (65.1MB/s), 12.4MiB/s-21.8MiB/s (13.0MB/s-22.8MB/s), io=65.0MiB (68.1MB), run=1007-1046msec 00:10:31.303 WRITE: bw=66.6MiB/s (69.8MB/s), 13.4MiB/s-24.0MiB/s (14.0MB/s-25.1MB/s), io=69.7MiB (73.1MB), run=1007-1046msec 00:10:31.303 00:10:31.303 Disk stats (read/write): 00:10:31.303 nvme0n1: ios=3487/3584, merge=0/0, ticks=40822/58253, in_queue=99075, util=99.20% 00:10:31.303 nvme0n2: ios=4621/5551, merge=0/0, ticks=41892/48523, in_queue=90415, util=86.90% 00:10:31.303 nvme0n3: 
ios=2667/3072, merge=0/0, ticks=27714/24780, in_queue=52494, util=88.97% 00:10:31.303 nvme0n4: ios=3057/3072, merge=0/0, ticks=26649/28468, in_queue=55117, util=98.74% 00:10:31.303 19:11:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:31.303 19:11:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3633339 00:10:31.303 19:11:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:31.303 19:11:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:31.303 [global] 00:10:31.303 thread=1 00:10:31.303 invalidate=1 00:10:31.303 rw=read 00:10:31.303 time_based=1 00:10:31.303 runtime=10 00:10:31.303 ioengine=libaio 00:10:31.303 direct=1 00:10:31.303 bs=4096 00:10:31.303 iodepth=1 00:10:31.303 norandommap=1 00:10:31.303 numjobs=1 00:10:31.303 00:10:31.303 [job0] 00:10:31.303 filename=/dev/nvme0n1 00:10:31.303 [job1] 00:10:31.303 filename=/dev/nvme0n2 00:10:31.303 [job2] 00:10:31.303 filename=/dev/nvme0n3 00:10:31.303 [job3] 00:10:31.303 filename=/dev/nvme0n4 00:10:31.303 Could not set queue depth (nvme0n1) 00:10:31.303 Could not set queue depth (nvme0n2) 00:10:31.303 Could not set queue depth (nvme0n3) 00:10:31.303 Could not set queue depth (nvme0n4) 00:10:31.560 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.560 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.560 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.560 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.560 fio-3.35 00:10:31.560 Starting 4 threads 00:10:34.082 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:34.338 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:34.338 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=35119104, buflen=4096 00:10:34.338 fio: pid=3633580, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:34.595 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.595 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:34.595 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=712704, buflen=4096 00:10:34.595 fio: pid=3633579, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:34.595 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=44990464, buflen=4096 00:10:34.595 fio: pid=3633577, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:34.852 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.852 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc1 00:10:34.852 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=26394624, buflen=4096 00:10:34.852 fio: pid=3633578, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:34.852 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.852 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:35.111 00:10:35.111 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3633577: Tue Nov 26 19:11:57 2024 00:10:35.111 read: IOPS=3565, BW=13.9MiB/s (14.6MB/s)(42.9MiB/3081msec) 00:10:35.111 slat (usec): min=6, max=11621, avg= 8.61, stdev=123.63 00:10:35.111 clat (usec): min=158, max=41523, avg=269.98, stdev=1508.20 00:10:35.111 lat (usec): min=165, max=41530, avg=278.59, stdev=1513.82 00:10:35.111 clat percentiles (usec): 00:10:35.111 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:10:35.111 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:10:35.111 | 70.00th=[ 219], 80.00th=[ 227], 90.00th=[ 239], 95.00th=[ 260], 00:10:35.111 | 99.00th=[ 310], 99.50th=[ 392], 99.90th=[41157], 99.95th=[41157], 00:10:35.111 | 99.99th=[41157] 00:10:35.111 bw ( KiB/s): min= 1836, max=18472, per=45.05%, avg=14267.67, stdev=6356.51, samples=6 00:10:35.111 iops : min= 459, max= 4618, avg=3566.83, stdev=1589.09, samples=6 00:10:35.111 lat (usec) : 250=93.28%, 500=6.56%, 750=0.01% 00:10:35.111 lat (msec) : 50=0.14% 00:10:35.111 cpu : usr=1.10%, sys=2.89%, ctx=10988, majf=0, minf=1 00:10:35.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.111 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.111 issued rwts: total=10985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.111 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3633578: Tue Nov 26 19:11:57 2024 00:10:35.111 read: IOPS=1949, BW=7797KiB/s (7984kB/s)(25.2MiB/3306msec) 00:10:35.111 slat (usec): min=6, max=11623, avg=13.69, stdev=249.53 00:10:35.111 clat (usec): min=154, max=42004, avg=493.36, stdev=3400.58 00:10:35.111 lat (usec): min=161, max=42028, avg=507.05, stdev=3410.64 00:10:35.111 clat percentiles (usec): 00:10:35.111 | 1.00th=[ 165], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 190], 00:10:35.111 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 210], 00:10:35.111 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 239], 95.00th=[ 255], 00:10:35.111 | 99.00th=[ 306], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:35.111 | 99.99th=[42206] 00:10:35.111 bw ( KiB/s): min= 263, max=18568, per=22.20%, avg=7030.67, stdev=8893.32, samples=6 00:10:35.111 iops : min= 65, max= 4642, avg=1757.50, stdev=2223.38, samples=6 00:10:35.111 lat (usec) : 250=93.76%, 500=5.48%, 750=0.05% 00:10:35.111 lat (msec) : 50=0.70% 00:10:35.111 cpu : usr=0.45%, sys=1.88%, ctx=6450, majf=0, minf=1 00:10:35.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.111 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.111 issued rwts: total=6445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.111 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3633579: Tue Nov 26 19:11:57 2024 00:10:35.111 read: IOPS=59, BW=238KiB/s (243kB/s)(696KiB/2929msec) 00:10:35.111 slat (nsec): min=8154, max=29605, avg=12113.03, stdev=5254.28 00:10:35.111 clat (usec): min=290, max=42030, avg=16692.37, stdev=20039.00 00:10:35.111 lat (usec): min=299, max=42040, avg=16704.48, stdev=20041.90 00:10:35.111 clat percentiles (usec): 00:10:35.111 | 1.00th=[ 293], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 297], 00:10:35.111 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[36963], 00:10:35.111 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:35.111 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:35.111 | 99.99th=[42206] 00:10:35.111 bw ( KiB/s): min= 96, max= 408, per=0.55%, avg=174.20, stdev=134.16, samples=5 00:10:35.111 iops : min= 24, max= 102, avg=43.40, stdev=33.64, samples=5 00:10:35.111 lat (usec) : 500=59.43% 00:10:35.111 lat (msec) : 50=40.00% 00:10:35.111 cpu : usr=0.03%, sys=0.07%, ctx=176, majf=0, minf=2 00:10:35.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.111 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.111 issued rwts: total=175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.111 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3633580: Tue Nov 26 19:11:57 2024 00:10:35.111 read: IOPS=3210, BW=12.5MiB/s (13.1MB/s)(33.5MiB/2671msec) 00:10:35.111 slat (nsec): min=6442, max=48879, avg=7360.71, stdev=1273.39 00:10:35.111 clat (usec): min=163, max=41042, avg=300.56, stdev=1966.58 00:10:35.111 lat (usec): min=170, max=41065, avg=307.92, stdev=1967.32 00:10:35.111 clat percentiles (usec): 00:10:35.111 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:10:35.111 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 206], 00:10:35.111 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 231], 00:10:35.111 | 99.00th=[ 273], 99.50th=[ 293], 99.90th=[41157], 99.95th=[41157], 00:10:35.111 | 99.99th=[41157] 00:10:35.111 bw ( KiB/s): min= 103, max=19056, per=39.84%, avg=12617.40, stdev=8839.37, samples=5 00:10:35.111 iops : min= 25, max= 4764, avg=3154.20, stdev=2210.11, samples=5 00:10:35.111 lat (usec) : 250=97.63%, 500=2.12% 00:10:35.111 lat (msec) : 50=0.23% 00:10:35.111 cpu : usr=0.79%, sys=2.92%, ctx=8575, majf=0, minf=2 00:10:35.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.111 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.111 issued rwts: total=8575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.111 00:10:35.111 Run status group 0 (all jobs): 00:10:35.111 READ: bw=30.9MiB/s (32.4MB/s), 238KiB/s-13.9MiB/s (243kB/s-14.6MB/s), io=102MiB (107MB), run=2671-3306msec 00:10:35.111 00:10:35.111 Disk stats (read/write): 00:10:35.111 nvme0n1: ios=10924/0, merge=0/0, ticks=2903/0, 
in_queue=2903, util=93.75% 00:10:35.111 nvme0n2: ios=5518/0, merge=0/0, ticks=2968/0, in_queue=2968, util=94.35% 00:10:35.111 nvme0n3: ios=206/0, merge=0/0, ticks=3495/0, in_queue=3495, util=99.66% 00:10:35.111 nvme0n4: ios=8140/0, merge=0/0, ticks=2448/0, in_queue=2448, util=96.35% 00:10:35.111 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.111 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:35.368 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.368 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:35.625 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.625 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:35.882 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.882 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:35.882 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:35.882 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3633339 00:10:35.882 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:35.882 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:36.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.139 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:36.139 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:36.139 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:36.139 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.139 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:36.139 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.139 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:36.139 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:36.139 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:36.139 nvmf hotplug test: fio failed as expected 00:10:36.139 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
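The err=95 ("Operation not supported") results in the fio summaries above are the point of this hotplug test: target/fio.sh deletes the backing Malloc bdevs (Malloc3 through Malloc6 in this run) while the fio jobs are still running against nqn.2016-06.io.spdk:cnode1, so the remaining I/O is expected to fail and the wait on the fio pid (3633339 here) comes back non-zero, giving fio_status=4 and the "fio failed as expected" message. The sketch below only paraphrases the pattern visible in the xtrace lines; variable names such as fio_pid and the exact error handling are assumptions, not the verbatim source of fio.sh.

    # Rough sketch of the hotplug check as it appears in the trace (hedged, not verbatim):
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
        $rpc_py bdev_malloc_delete "$malloc_bdev"   # Malloc3..Malloc6 pulled out from under running fio
    done
    fio_status=0
    wait $fio_pid || fio_status=4                   # trace shows: wait 3633339, then fio_status=4
    if [ "$fio_status" -eq 0 ]; then
        echo "fio completed cleanly"                # would be the unexpected outcome here
    else
        echo "nvmf hotplug test: fio failed as expected"
    fi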
00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:36.396 rmmod nvme_tcp 00:10:36.396 rmmod nvme_fabrics 00:10:36.396 rmmod nvme_keyring 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3630435 ']' 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3630435 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3630435 ']' 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3630435 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3630435 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3630435' 00:10:36.396 killing process with pid 3630435 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3630435 00:10:36.396 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3630435 00:10:36.654 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:36.654 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:36.654 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:36.654 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 
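nvmftestfini above unloads the initiator-side kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines) and then tears down the nvmf_tgt app with killprocess: it confirms pid 3630435 is still alive with kill -0, reads the process name via ps (reactor_0 in this run) so it never kills a sudo wrapper by mistake, then kills and waits. The iptr call at the end of this chunk, whose body follows below, simply replays iptables-save through grep -v SPDK_NVMF into iptables-restore so only the rules this test tagged are dropped. The helper below is a rough reconstruction from the xtrace lines, not the actual autotest_common.sh source.

    # Hedged, simplified sketch of the killprocess pattern seen in the trace:
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0              # nothing to do if it already exited
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")     # reactor_0 for the nvmf target here
        [ "$process_name" = sudo ] && return 1              # refuse to kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }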
00:10:36.654 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:36.654 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:36.654 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:36.654 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:36.654 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:36.654 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.654 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.654 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.560 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:38.560 00:10:38.560 real 0m26.924s 00:10:38.560 user 1m47.429s 00:10:38.560 sys 0m8.427s 00:10:38.560 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.560 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.560 ************************************ 00:10:38.560 END TEST nvmf_fio_target 00:10:38.560 ************************************ 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:38.819 ************************************ 00:10:38.819 START TEST nvmf_bdevio 00:10:38.819 ************************************ 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:38.819 * Looking for test storage... 
00:10:38.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:38.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.819 --rc genhtml_branch_coverage=1 00:10:38.819 --rc genhtml_function_coverage=1 00:10:38.819 --rc genhtml_legend=1 00:10:38.819 --rc geninfo_all_blocks=1 00:10:38.819 --rc geninfo_unexecuted_blocks=1 00:10:38.819 00:10:38.819 ' 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:38.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.819 --rc genhtml_branch_coverage=1 00:10:38.819 --rc genhtml_function_coverage=1 00:10:38.819 --rc genhtml_legend=1 00:10:38.819 --rc geninfo_all_blocks=1 00:10:38.819 --rc geninfo_unexecuted_blocks=1 00:10:38.819 00:10:38.819 ' 00:10:38.819 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:38.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.820 --rc genhtml_branch_coverage=1 00:10:38.820 --rc genhtml_function_coverage=1 00:10:38.820 --rc genhtml_legend=1 00:10:38.820 --rc geninfo_all_blocks=1 00:10:38.820 --rc geninfo_unexecuted_blocks=1 00:10:38.820 00:10:38.820 ' 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:38.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.820 --rc genhtml_branch_coverage=1 00:10:38.820 --rc genhtml_function_coverage=1 00:10:38.820 --rc genhtml_legend=1 00:10:38.820 --rc geninfo_all_blocks=1 00:10:38.820 --rc geninfo_unexecuted_blocks=1 00:10:38.820 00:10:38.820 ' 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:38.820 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:39.079 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:39.079 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:39.080 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.080 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:39.080 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:39.080 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:39.080 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.080 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.080 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.080 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:39.080 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:39.080 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:39.080 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:44.455 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:44.455 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:44.455 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:44.455 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:44.455 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:44.455 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:44.455 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:44.455 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:44.455 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:44.455 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:44.455 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:44.455 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:44.455 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:44.455 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:44.455 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:44.455 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:44.455 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:44.714 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:44.714 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:44.714 19:12:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:44.714 Found net devices under 0000:86:00.0: cvl_0_0 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:44.714 Found net devices under 0000:86:00.1: cvl_0_1 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:44.714 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:44.715 
19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:44.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:44.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:10:44.715 00:10:44.715 --- 10.0.0.2 ping statistics --- 00:10:44.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.715 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:44.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:44.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:10:44.715 00:10:44.715 --- 10.0.0.1 ping statistics --- 00:10:44.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.715 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:44.715 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:44.974 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:44.974 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:44.974 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.974 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:44.974 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3638358 00:10:44.974 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:44.974 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3638358 00:10:44.974 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3638358 ']' 00:10:44.974 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.974 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.974 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.974 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.974 19:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:44.974 [2024-11-26 19:12:07.909540] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:10:44.974 [2024-11-26 19:12:07.909581] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.974 [2024-11-26 19:12:07.985043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:44.974 [2024-11-26 19:12:08.024488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.974 [2024-11-26 19:12:08.024527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:44.974 [2024-11-26 19:12:08.024534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.974 [2024-11-26 19:12:08.024540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.974 [2024-11-26 19:12:08.024546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:44.974 [2024-11-26 19:12:08.026055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:44.974 [2024-11-26 19:12:08.026168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:44.974 [2024-11-26 19:12:08.026254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.974 [2024-11-26 19:12:08.026254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.233 [2024-11-26 19:12:08.175610] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.233 Malloc0 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.233 19:12:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.233 [2024-11-26 19:12:08.237211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:45.233 { 00:10:45.233 "params": { 00:10:45.233 "name": "Nvme$subsystem", 00:10:45.233 "trtype": "$TEST_TRANSPORT", 00:10:45.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:45.233 "adrfam": "ipv4", 00:10:45.233 "trsvcid": "$NVMF_PORT", 00:10:45.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:45.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:45.233 "hdgst": ${hdgst:-false}, 00:10:45.233 "ddgst": ${ddgst:-false} 00:10:45.233 }, 00:10:45.233 "method": "bdev_nvme_attach_controller" 00:10:45.233 } 00:10:45.233 EOF 00:10:45.233 )") 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:45.233 19:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:45.233 "params": { 00:10:45.233 "name": "Nvme1", 00:10:45.233 "trtype": "tcp", 00:10:45.233 "traddr": "10.0.0.2", 00:10:45.233 "adrfam": "ipv4", 00:10:45.233 "trsvcid": "4420", 00:10:45.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:45.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:45.233 "hdgst": false, 00:10:45.233 "ddgst": false 00:10:45.233 }, 00:10:45.233 "method": "bdev_nvme_attach_controller" 00:10:45.233 }' 00:10:45.233 [2024-11-26 19:12:08.289854] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:10:45.233 [2024-11-26 19:12:08.289902] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3638510 ] 00:10:45.491 [2024-11-26 19:12:08.367365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:45.491 [2024-11-26 19:12:08.411005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.491 [2024-11-26 19:12:08.411113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.491 [2024-11-26 19:12:08.411113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.748 I/O targets: 00:10:45.748 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:45.748 00:10:45.748 00:10:45.748 CUnit - A unit testing framework for C - Version 2.1-3 00:10:45.748 http://cunit.sourceforge.net/ 00:10:45.748 00:10:45.748 00:10:45.748 Suite: bdevio tests on: Nvme1n1 00:10:45.748 Test: blockdev write read block ...passed 00:10:45.748 Test: blockdev write zeroes read block ...passed 00:10:45.748 Test: blockdev write zeroes read no split ...passed 00:10:45.748 Test: blockdev write zeroes read split ...passed 00:10:45.748 Test: blockdev write zeroes read split partial ...passed 00:10:45.748 Test: blockdev reset ...[2024-11-26 19:12:08.721776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:45.748 [2024-11-26 19:12:08.721838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec9350 (9): Bad file descriptor 00:10:45.748 [2024-11-26 19:12:08.774834] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:45.748 passed 00:10:45.748 Test: blockdev write read 8 blocks ...passed 00:10:45.748 Test: blockdev write read size > 128k ...passed 00:10:45.748 Test: blockdev write read invalid size ...passed 00:10:45.748 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:45.748 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:45.748 Test: blockdev write read max offset ...passed 00:10:46.005 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:46.005 Test: blockdev writev readv 8 blocks ...passed 00:10:46.005 Test: blockdev writev readv 30 x 1block ...passed 00:10:46.005 Test: blockdev writev readv block ...passed 00:10:46.005 Test: blockdev writev readv size > 128k ...passed 00:10:46.005 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:46.005 Test: blockdev comparev and writev ...[2024-11-26 19:12:08.987526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:46.005 [2024-11-26 19:12:08.987554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:46.005 [2024-11-26 19:12:08.987568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:46.005 [2024-11-26 19:12:08.987576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:46.005 [2024-11-26 19:12:08.987824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:46.005 [2024-11-26 19:12:08.987835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:46.005 [2024-11-26 19:12:08.987847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:46.005 [2024-11-26 19:12:08.987854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:46.005 [2024-11-26 19:12:08.988088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:46.005 [2024-11-26 19:12:08.988097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:46.005 [2024-11-26 19:12:08.988108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:46.005 [2024-11-26 19:12:08.988115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:46.005 [2024-11-26 19:12:08.988338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:46.005 [2024-11-26 19:12:08.988347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:46.005 [2024-11-26 19:12:08.988359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:46.005 [2024-11-26 19:12:08.988366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:46.005 passed 00:10:46.005 Test: blockdev nvme passthru rw ...passed 00:10:46.005 Test: blockdev nvme passthru vendor specific ...[2024-11-26 19:12:09.070119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:46.005 [2024-11-26 19:12:09.070134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:46.005 [2024-11-26 19:12:09.070241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:46.005 [2024-11-26 19:12:09.070250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:46.005 [2024-11-26 19:12:09.070349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:46.005 [2024-11-26 19:12:09.070358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:46.005 [2024-11-26 19:12:09.070460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:46.005 [2024-11-26 19:12:09.070469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:46.005 passed 00:10:46.005 Test: blockdev nvme admin passthru ...passed 00:10:46.266 Test: blockdev copy ...passed 00:10:46.266 00:10:46.266 Run Summary: Type Total Ran Passed Failed Inactive 00:10:46.266 suites 1 1 n/a 0 0 00:10:46.266 tests 23 23 23 0 0 00:10:46.266 asserts 152 152 152 0 n/a 00:10:46.266 00:10:46.266 Elapsed time = 1.042 seconds 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:46.266 rmmod nvme_tcp 00:10:46.266 rmmod nvme_fabrics 00:10:46.266 rmmod nvme_keyring 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
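The bdevio suite above (23 CUnit tests, all passed, against Nvme1n1: 131072 blocks of 512 bytes) ran against a target that bdevio.sh assembled earlier in this chunk with a short rpc sequence. The block below just collects those rpc calls as they appear in the trace, for readability; it assumes the SPDK tree layout used by this job and elides the fd plumbing bdevio.sh uses to feed the JSON config.

    # Recap of the traced setup, hedged and simplified:
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB backing bdev, 512 B blocks
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevio then attaches as the initiator using the JSON printed earlier in the trace
    # (a single bdev_nvme_attach_controller to 10.0.0.2:4420), passed as --json /dev/fd/62.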
00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3638358 ']' 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3638358 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3638358 ']' 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3638358 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.266 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3638358 00:10:46.528 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:46.528 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:46.528 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3638358' 00:10:46.528 killing process with pid 3638358 00:10:46.528 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3638358 00:10:46.528 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3638358 00:10:46.528 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:46.528 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:46.528 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:46.528 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:46.528 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:46.528 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:46.528 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:46.528 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:46.528 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:46.528 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.528 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.528 19:12:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:49.064 00:10:49.064 real 0m9.908s 00:10:49.064 user 0m9.757s 00:10:49.064 sys 0m5.034s 00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.064 ************************************ 00:10:49.064 END TEST nvmf_bdevio 00:10:49.064 ************************************ 00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:49.064 00:10:49.064 real 4m37.367s 00:10:49.064 user 10m25.672s 00:10:49.064 sys 1m37.284s 
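The real/user/sys block just above and the END TEST banner just below come from the same run_test wrapper that frames every sub-suite in this log (nvmf_bdevio at 0m9.9s, the whole nvmf_target_core group at 4m37s). The sketch below is inferred only from the banners and timing visible in the output; the real helper in autotest_common.sh also manages xtrace and exit-code propagation.

    # Hedged sketch of the run_test banner/timing pattern seen throughout the log:
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                                   # e.g. .../test/nvmf/target/bdevio.sh --transport=tcp
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }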
00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:49.064 ************************************ 00:10:49.064 END TEST nvmf_target_core 00:10:49.064 ************************************ 00:10:49.064 19:12:11 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:49.064 19:12:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:49.064 19:12:11 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.064 19:12:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:49.064 ************************************ 00:10:49.064 START TEST nvmf_target_extra 00:10:49.064 ************************************ 00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:49.064 * Looking for test storage... 00:10:49.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.064 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:49.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.065 --rc genhtml_branch_coverage=1 00:10:49.065 --rc genhtml_function_coverage=1 00:10:49.065 --rc genhtml_legend=1 00:10:49.065 --rc geninfo_all_blocks=1 00:10:49.065 --rc geninfo_unexecuted_blocks=1 00:10:49.065 00:10:49.065 ' 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:49.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.065 --rc genhtml_branch_coverage=1 00:10:49.065 --rc genhtml_function_coverage=1 00:10:49.065 --rc genhtml_legend=1 00:10:49.065 --rc geninfo_all_blocks=1 00:10:49.065 --rc geninfo_unexecuted_blocks=1 00:10:49.065 00:10:49.065 ' 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:49.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.065 --rc genhtml_branch_coverage=1 00:10:49.065 --rc genhtml_function_coverage=1 00:10:49.065 --rc genhtml_legend=1 00:10:49.065 --rc geninfo_all_blocks=1 00:10:49.065 --rc geninfo_unexecuted_blocks=1 00:10:49.065 00:10:49.065 ' 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:49.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.065 --rc genhtml_branch_coverage=1 00:10:49.065 --rc genhtml_function_coverage=1 00:10:49.065 --rc genhtml_legend=1 00:10:49.065 --rc geninfo_all_blocks=1 00:10:49.065 --rc geninfo_unexecuted_blocks=1 00:10:49.065 00:10:49.065 ' 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:49.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:49.065 ************************************ 00:10:49.065 START TEST nvmf_example 00:10:49.065 ************************************ 00:10:49.065 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:49.065 * Looking for test storage... 
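nvmf_example is launched here through the run_test helper, which prints the START TEST / END TEST banners and the timing summary seen in this trace (the test-storage probe it opens with continues just below). A rough illustration of that wrapper pattern, not the actual implementation in autotest_common.sh:

```bash
# Approximation of a run_test-style wrapper: banner, run the script with its
# arguments, banner again with elapsed time and exit status.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    local start=$SECONDS rc=0
    "$@" || rc=$?
    echo "************************************"
    echo "END TEST $name ($(( SECONDS - start ))s, rc=$rc)"
    echo "************************************"
    return $rc
}

# Usage mirroring the invocation traced above:
# run_test nvmf_example ./test/nvmf/target/nvmf_example.sh --transport=tcp
```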
00:10:49.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.065 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:49.065 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:49.065 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:49.065 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:49.065 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.065 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.065 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.065 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.065 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.065 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:49.065 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.065 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.065 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:49.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.066 --rc genhtml_branch_coverage=1 00:10:49.066 --rc genhtml_function_coverage=1 00:10:49.066 --rc genhtml_legend=1 00:10:49.066 --rc geninfo_all_blocks=1 00:10:49.066 --rc geninfo_unexecuted_blocks=1 00:10:49.066 00:10:49.066 ' 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:49.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.066 --rc genhtml_branch_coverage=1 00:10:49.066 --rc genhtml_function_coverage=1 00:10:49.066 --rc genhtml_legend=1 00:10:49.066 --rc geninfo_all_blocks=1 00:10:49.066 --rc geninfo_unexecuted_blocks=1 00:10:49.066 00:10:49.066 ' 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:49.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.066 --rc genhtml_branch_coverage=1 00:10:49.066 --rc genhtml_function_coverage=1 00:10:49.066 --rc genhtml_legend=1 00:10:49.066 --rc geninfo_all_blocks=1 00:10:49.066 --rc geninfo_unexecuted_blocks=1 00:10:49.066 00:10:49.066 ' 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:49.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.066 --rc genhtml_branch_coverage=1 00:10:49.066 --rc genhtml_function_coverage=1 00:10:49.066 --rc genhtml_legend=1 00:10:49.066 --rc geninfo_all_blocks=1 00:10:49.066 --rc geninfo_unexecuted_blocks=1 00:10:49.066 00:10:49.066 ' 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:49.066 19:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.066 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.325 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:49.325 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:49.325 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.325 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.325 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:49.325 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.325 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:49.325 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:49.325 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.325 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.325 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.325 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.325 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:49.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:49.326 19:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:49.326 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:55.894 19:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:55.894 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.894 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:55.895 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:55.895 Found net devices under 0000:86:00.0: cvl_0_0 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:55.895 Found net devices under 0000:86:00.1: cvl_0_1 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.895 19:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:55.895 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.895 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.895 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.895 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:55.895 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:55.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
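The nvmf_tcp_init trace above carves the two e810 ports into a target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, an SPDK_NVMF-tagged iptables rule opens port 4420, and both directions are pinged (the ping replies and statistics continue in the log right after this sketch). Condensed from the commands in the trace, with interface names specific to this rig:

```bash
#!/usr/bin/env bash
# Dual-port TCP test-bed setup as traced above (run as root).
set -e

TARGET_IF=cvl_0_0        # moved into a namespace, will host the target at 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace as the initiator at 10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in, tagged so teardown can strip only this rule.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Verify reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```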
00:10:55.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:10:55.895 00:10:55.895 --- 10.0.0.2 ping statistics --- 00:10:55.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.895 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:10:55.895 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:10:55.895 00:10:55.895 --- 10.0.0.1 ping statistics --- 00:10:55.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.895 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:10:55.895 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.895 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3642316 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3642316 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3642316 ']' 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.896 19:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.896 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.157 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.157 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:56.157 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:56.157 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:56.157 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:10:56.158 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:08.350 Initializing NVMe Controllers
00:11:08.350 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:08.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:08.350 Initialization complete. Launching workers.
00:11:08.350 ========================================================
00:11:08.350 Latency(us)
00:11:08.350 Device Information : IOPS MiB/s Average min max
00:11:08.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18217.16 71.16 3512.42 542.20 16192.73
00:11:08.350 ========================================================
00:11:08.350 Total : 18217.16 71.16 3512.42 542.20 16192.73
00:11:08.350
00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:08.350 rmmod nvme_tcp
00:11:08.350 rmmod nvme_fabrics
00:11:08.350 rmmod nvme_keyring
00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3642316 ']'
00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3642316
00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3642316 ']'
00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3642316
00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3642316
00:11:08.350 19:12:29
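The rpc_cmd calls traced above provision the example target, and spdk_nvme_perf then drives it for 10 seconds, producing the latency summary printed above. Written out as direct scripts/rpc.py invocations (a sketch: rpc_cmd is the test-harness wrapper that sends the same method names and arguments to the rpc_addr /var/tmp/spdk.sock shown earlier in the trace; all flags are copied from the trace rather than documented here):

```bash
#!/usr/bin/env bash
# Provision the example nvmf target and run the same perf workload as above.
set -e

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"   # socket the example app listens on

# TCP transport, a 64 MiB / 512 B-block malloc bdev, and a subsystem
# exposing it on 10.0.0.2:4420 (values match MALLOC_BDEV_SIZE/BLOCK_SIZE above).
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                     # first malloc bdev is Malloc0 in the trace
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Userspace initiator: 10 s of 4 KiB random I/O at queue depth 64 with a 30% read mix,
# exactly the invocation whose summary table appears in the trace.
"$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```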
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3642316' 00:11:08.350 killing process with pid 3642316 00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3642316 00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3642316 00:11:08.350 nvmf threads initialize successfully 00:11:08.350 bdev subsystem init successfully 00:11:08.350 created a nvmf target service 00:11:08.350 create targets's poll groups done 00:11:08.350 all subsystems of target started 00:11:08.350 nvmf target is running 00:11:08.350 all subsystems of target stopped 00:11:08.350 destroy targets's poll groups done 00:11:08.350 destroyed the nvmf target service 00:11:08.350 bdev subsystem finish successfully 00:11:08.350 nvmf threads destroy successfully 00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.350 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.917 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:08.917 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:08.917 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:08.917 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.917 00:11:08.917 real 0m19.832s 00:11:08.917 user 0m46.161s 00:11:08.917 sys 0m6.110s 00:11:08.917 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.917 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.917 ************************************ 00:11:08.917 END TEST nvmf_example 00:11:08.917 ************************************ 00:11:08.917 19:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:08.917 19:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:08.917 19:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.917 19:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:08.917 ************************************ 00:11:08.917 START TEST nvmf_filesystem 00:11:08.917 ************************************ 00:11:08.917 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:08.917 * Looking for test storage... 00:11:08.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.917 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:08.917 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:08.917 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:09.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.180 --rc genhtml_branch_coverage=1 00:11:09.180 --rc genhtml_function_coverage=1 00:11:09.180 --rc genhtml_legend=1 00:11:09.180 --rc geninfo_all_blocks=1 00:11:09.180 --rc geninfo_unexecuted_blocks=1 00:11:09.180 00:11:09.180 ' 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:09.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.180 --rc genhtml_branch_coverage=1 00:11:09.180 --rc genhtml_function_coverage=1 00:11:09.180 --rc genhtml_legend=1 00:11:09.180 --rc geninfo_all_blocks=1 00:11:09.180 --rc geninfo_unexecuted_blocks=1 00:11:09.180 00:11:09.180 ' 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:09.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.180 --rc genhtml_branch_coverage=1 00:11:09.180 --rc genhtml_function_coverage=1 00:11:09.180 --rc genhtml_legend=1 00:11:09.180 --rc geninfo_all_blocks=1 00:11:09.180 --rc geninfo_unexecuted_blocks=1 00:11:09.180 00:11:09.180 ' 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:09.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.180 --rc genhtml_branch_coverage=1 00:11:09.180 --rc genhtml_function_coverage=1 00:11:09.180 --rc genhtml_legend=1 00:11:09.180 --rc geninfo_all_blocks=1 00:11:09.180 --rc geninfo_unexecuted_blocks=1 00:11:09.180 00:11:09.180 ' 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:09.180 19:12:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:09.180 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:09.181 
19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:09.181 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:09.181 #define SPDK_CONFIG_H 00:11:09.181 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:09.181 #define SPDK_CONFIG_APPS 1 00:11:09.181 #define SPDK_CONFIG_ARCH native 00:11:09.181 #undef SPDK_CONFIG_ASAN 00:11:09.181 #undef SPDK_CONFIG_AVAHI 00:11:09.181 #undef SPDK_CONFIG_CET 00:11:09.181 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:09.181 #define SPDK_CONFIG_COVERAGE 1 00:11:09.181 #define SPDK_CONFIG_CROSS_PREFIX 00:11:09.181 #undef SPDK_CONFIG_CRYPTO 00:11:09.181 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:09.181 #undef SPDK_CONFIG_CUSTOMOCF 00:11:09.181 #undef SPDK_CONFIG_DAOS 00:11:09.181 #define SPDK_CONFIG_DAOS_DIR 00:11:09.181 #define SPDK_CONFIG_DEBUG 1 00:11:09.181 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:09.181 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:09.181 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:09.181 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:09.181 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:09.182 #undef SPDK_CONFIG_DPDK_UADK 00:11:09.182 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:09.182 #define SPDK_CONFIG_EXAMPLES 1 00:11:09.182 #undef SPDK_CONFIG_FC 00:11:09.182 #define SPDK_CONFIG_FC_PATH 00:11:09.182 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:09.182 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:09.182 #define SPDK_CONFIG_FSDEV 1 00:11:09.182 #undef SPDK_CONFIG_FUSE 00:11:09.182 #undef SPDK_CONFIG_FUZZER 00:11:09.182 #define SPDK_CONFIG_FUZZER_LIB 00:11:09.182 #undef SPDK_CONFIG_GOLANG 00:11:09.182 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:09.182 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:09.182 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:09.182 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:09.182 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:09.182 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:09.182 #undef SPDK_CONFIG_HAVE_LZ4 00:11:09.182 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:09.182 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:09.182 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:09.182 #define SPDK_CONFIG_IDXD 1 00:11:09.182 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:09.182 #undef SPDK_CONFIG_IPSEC_MB 00:11:09.182 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:09.182 #define SPDK_CONFIG_ISAL 1 00:11:09.182 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:09.182 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:09.182 #define SPDK_CONFIG_LIBDIR 00:11:09.182 #undef SPDK_CONFIG_LTO 00:11:09.182 #define SPDK_CONFIG_MAX_LCORES 128 00:11:09.182 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:09.182 #define SPDK_CONFIG_NVME_CUSE 1 00:11:09.182 #undef SPDK_CONFIG_OCF 00:11:09.182 #define SPDK_CONFIG_OCF_PATH 00:11:09.182 #define SPDK_CONFIG_OPENSSL_PATH 00:11:09.182 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:09.182 #define SPDK_CONFIG_PGO_DIR 00:11:09.182 #undef SPDK_CONFIG_PGO_USE 00:11:09.182 #define SPDK_CONFIG_PREFIX /usr/local 00:11:09.182 #undef SPDK_CONFIG_RAID5F 00:11:09.182 #undef SPDK_CONFIG_RBD 00:11:09.182 #define SPDK_CONFIG_RDMA 1 00:11:09.182 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:09.182 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:09.182 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:09.182 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:09.182 #define SPDK_CONFIG_SHARED 1 00:11:09.182 #undef SPDK_CONFIG_SMA 00:11:09.182 #define SPDK_CONFIG_TESTS 1 00:11:09.182 #undef SPDK_CONFIG_TSAN 
00:11:09.182 #define SPDK_CONFIG_UBLK 1 00:11:09.182 #define SPDK_CONFIG_UBSAN 1 00:11:09.182 #undef SPDK_CONFIG_UNIT_TESTS 00:11:09.182 #undef SPDK_CONFIG_URING 00:11:09.182 #define SPDK_CONFIG_URING_PATH 00:11:09.182 #undef SPDK_CONFIG_URING_ZNS 00:11:09.182 #undef SPDK_CONFIG_USDT 00:11:09.182 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:09.182 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:09.182 #define SPDK_CONFIG_VFIO_USER 1 00:11:09.182 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:09.182 #define SPDK_CONFIG_VHOST 1 00:11:09.182 #define SPDK_CONFIG_VIRTIO 1 00:11:09.182 #undef SPDK_CONFIG_VTUNE 00:11:09.182 #define SPDK_CONFIG_VTUNE_DIR 00:11:09.182 #define SPDK_CONFIG_WERROR 1 00:11:09.182 #define SPDK_CONFIG_WPDK_DIR 00:11:09.182 #undef SPDK_CONFIG_XNVME 00:11:09.182 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:09.182 19:12:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:09.182 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:09.183 19:12:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:09.183 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:09.184 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3644614 ]] 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3644614 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
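Editor's note: the `kill -0 3644614` probe traced just above is the usual shell idiom for asking "is this PID still alive?" without delivering a real signal; only if the runner process is alive does the harness go on to call set_test_storage. A minimal, self-contained sketch of that guard (the PID value and message below are placeholders, not taken from this run):

    # Return 0 if the given PID refers to a live process we are allowed to signal.
    pid_alive() {
        local pid="$1"
        kill -0 "$pid" 2>/dev/null
    }

    # Example: only prepare scratch storage while a hypothetical runner PID 12345 lives.
    if pid_alive 12345; then
        echo "runner alive, preparing scratch space"
    fi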
00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.HBXjhW 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.HBXjhW/tests/target /tmp/spdk.HBXjhW 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:09.185 19:12:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189722345472 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963936768 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6241591296 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971937280 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981968384 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169728512 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192788992 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23060480 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981427712 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981968384 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=540672 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:09.185 19:12:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596378112 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596390400 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596378112 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596390400 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:09.185 * Looking for test storage... 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189722345472 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8456183808 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 
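What the df parsing above amounts to: set_test_storage builds a list of candidate directories (the test dir, an mktemp fallback under /tmp/spdk.XXXXXX), reads `df -T` into parallel mounts/fss/sizes/avails arrays, and picks the first candidate whose backing filesystem has at least the requested ~2.2 GB free; in this run the overlay root (~189 GB available) wins and is exported as SPDK_TEST_STORAGE. A condensed, stand-alone sketch of that selection (the -B1 flag and the two-entry candidate list are simplifications, not what the helper literally does):

  requested_size=$((2 * 1024 * 1024 * 1024 + 64 * 1024 * 1024))   # ~2.2 GB, as above
  candidates=("$PWD" "/tmp")                                      # illustrative list

  declare -A fs_avail
  while read -r source fstype size used avail _ mount; do
      fs_avail["$mount"]=$avail
  done < <(df -B1 -T | grep -v '^Filesystem')

  for dir in "${candidates[@]}"; do
      mount_point=$(df "$dir" | awk '$1 !~ /Filesystem/{print $6}')
      if (( ${fs_avail[$mount_point]:-0} >= requested_size )); then
          export SPDK_TEST_STORAGE="$dir"
          echo "* Found test storage at $dir"
          break
      fi
  done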
00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:09.185 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:09.186 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:09.445 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:09.445 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.445 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:09.445 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.445 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:09.445 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:09.445 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.445 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:09.445 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.445 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.445 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.445 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:09.445 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.445 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:09.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.445 --rc genhtml_branch_coverage=1 00:11:09.445 --rc genhtml_function_coverage=1 00:11:09.445 --rc genhtml_legend=1 00:11:09.445 --rc geninfo_all_blocks=1 00:11:09.445 --rc geninfo_unexecuted_blocks=1 00:11:09.445 00:11:09.445 ' 00:11:09.445 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:09.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.445 --rc genhtml_branch_coverage=1 00:11:09.445 --rc genhtml_function_coverage=1 00:11:09.445 --rc genhtml_legend=1 00:11:09.445 --rc geninfo_all_blocks=1 00:11:09.445 --rc geninfo_unexecuted_blocks=1 00:11:09.445 00:11:09.445 ' 00:11:09.445 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:09.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.445 --rc genhtml_branch_coverage=1 00:11:09.445 --rc genhtml_function_coverage=1 00:11:09.445 --rc genhtml_legend=1 00:11:09.445 --rc geninfo_all_blocks=1 00:11:09.445 --rc geninfo_unexecuted_blocks=1 00:11:09.446 00:11:09.446 ' 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:09.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.446 --rc genhtml_branch_coverage=1 00:11:09.446 --rc genhtml_function_coverage=1 00:11:09.446 --rc genhtml_legend=1 00:11:09.446 --rc geninfo_all_blocks=1 00:11:09.446 --rc geninfo_unexecuted_blocks=1 00:11:09.446 00:11:09.446 ' 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
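The scripts/common.sh trace above is the lcov version gate: `lt 1.15 2` splits both version strings on `.`, `-` and `:` and compares them element by element, and since lcov 1.x sorts below 2 the older `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` spellings end up in LCOV_OPTS/LCOV. A heavily condensed sketch of that comparison (the real cmp_versions also validates that each component is numeric and handles the other operators):

  version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      done
      return 1   # equal is not "less than"
  }

  version_lt 1.15 2 && echo "lcov < 2: use the lcov_* --rc option names"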
-- nvmf/common.sh@7 -- # uname -s 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.446 19:12:32 
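The `[: : integer expression expected` message above is not a test failure: nvmf/common.sh line 33 runs `'[' '' -eq 1 ']'`, a numeric comparison against a variable that is empty in this configuration, and `-eq` insists on integers, so `[` prints the warning and simply evaluates false. The usual guard is a default expansion; FLAG below is a stand-in name, not the actual variable nvmf/common.sh tests:

  FLAG=""                                   # empty, as in this run

  # Reproduces the warning: -eq requires an integer on both sides.
  [ "$FLAG" -eq 1 ] && echo "flag set"

  # Quiet equivalents that treat empty/unset as 0:
  [ "${FLAG:-0}" -eq 1 ] && echo "flag set"
  (( ${FLAG:-0} == 1 )) && echo "flag set"

  true   # keep the sketch's own exit status at 0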
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:09.446 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:16.018 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:16.019 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:16.019 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:16.019 19:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:16.019 Found net devices under 0000:86:00.0: cvl_0_0 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:16.019 Found net devices under 0000:86:00.1: cvl_0_1 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:16.019 19:12:38 
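The gather_supported_nvmf_pci_devs walk above builds per-vendor device-ID lists (Intel E810 0x1592/0x159b, X722 0x37d2, a range of Mellanox IDs), matches them against the PCI bus, and resolves each matching function to its kernel net interface via /sys/bus/pci/devices/<bdf>/net/*; in this run the two E810 ports at 0000:86:00.0/.1 (8086:159b, ice driver) resolve to cvl_0_0 and cvl_0_1. The harness does the matching against a prebuilt pci_bus_cache; the sketch below swaps in lspci as a stand-in for that cache:

  intel=8086
  e810_ids=(1592 159b)     # the E810 device IDs the trace checks

  for id in "${e810_ids[@]}"; do
      while read -r pci _; do
          for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
              [ -e "$netdir" ] || continue
              echo "Found net devices under $pci: $(basename "$netdir")"
          done
      done < <(lspci -Dnn -d "${intel}:${id}" | awk '{print $1}')
  done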
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:16.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:16.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:11:16.019 00:11:16.019 --- 10.0.0.2 ping statistics --- 00:11:16.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.019 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:16.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:16.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:11:16.019 00:11:16.019 --- 10.0.0.1 ping statistics --- 00:11:16.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.019 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:16.019 ************************************ 00:11:16.019 START TEST nvmf_filesystem_no_in_capsule 00:11:16.019 ************************************ 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3647868 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3647868 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:16.019 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3647868 ']' 00:11:16.020 
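The bring-up traced above gives the test a two-namespace TCP path over the physical E810 ports: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), an iptables rule admits TCP/4420 on the initiator interface, and a ping in each direction proves the link before nvme-tcp is modprobed. A condensed replay of the same commands (same names and addresses as this run; needs root and these specific interfaces, and omits the initial address flushes):

  NS=cvl_0_0_ns_spdk
  TGT_IF=cvl_0_0          # target-side port, moved into the namespace
  INI_IF=cvl_0_1          # initiator-side port, stays in the root namespace

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                       # root namespace -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> root namespace
  modprobe nvme-tcp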
19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.020 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.020 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.020 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.020 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.020 [2024-11-26 19:12:38.470642] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:11:16.020 [2024-11-26 19:12:38.470697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.020 [2024-11-26 19:12:38.551945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.020 [2024-11-26 19:12:38.594094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.020 [2024-11-26 19:12:38.594133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.020 [2024-11-26 19:12:38.594139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.020 [2024-11-26 19:12:38.594145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.020 [2024-11-26 19:12:38.594150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
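nvmfappstart, traced above, launches the target inside the new namespace with `ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF`, records nvmfpid=3647868, and then waitforlisten blocks until the app answers on its RPC socket; the DPDK EAL parameter dump and the four reactor notices are normal startup output for core mask 0xF. A rough stand-in for that start/wait pair (the readiness probe below just waits for /var/tmp/spdk.sock to appear, which is a simplification of what waitforlisten really checks):

  NS=cvl_0_0_ns_spdk
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  for _ in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && break
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.1
  done
  echo "nvmf_tgt (pid $nvmfpid) is up"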
00:11:16.020 [2024-11-26 19:12:38.595565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.020 [2024-11-26 19:12:38.595704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.020 [2024-11-26 19:12:38.595775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.020 [2024-11-26 19:12:38.595776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.275 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.275 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:16.275 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:16.275 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:16.275 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.275 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.275 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:16.275 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:16.275 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.275 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.275 [2024-11-26 19:12:39.359442] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.275 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.275 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:16.275 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.275 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.533 Malloc1 00:11:16.533 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.533 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:16.533 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.533 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.533 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.533 19:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.533 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.533 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.533 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.533 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.533 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.533 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.533 [2024-11-26 19:12:39.526115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.533 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.533 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:16.533 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:16.534 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:16.534 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:16.534 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:16.534 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:16.534 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.534 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.534 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.534 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:16.534 { 00:11:16.534 "name": "Malloc1", 00:11:16.534 "aliases": [ 00:11:16.534 "6bd8ebf0-721d-43a0-97ae-30a5f667c017" 00:11:16.534 ], 00:11:16.534 "product_name": "Malloc disk", 00:11:16.534 "block_size": 512, 00:11:16.534 "num_blocks": 1048576, 00:11:16.534 "uuid": "6bd8ebf0-721d-43a0-97ae-30a5f667c017", 00:11:16.534 "assigned_rate_limits": { 00:11:16.534 "rw_ios_per_sec": 0, 00:11:16.534 "rw_mbytes_per_sec": 0, 00:11:16.534 "r_mbytes_per_sec": 0, 00:11:16.534 "w_mbytes_per_sec": 0 00:11:16.534 }, 00:11:16.534 "claimed": true, 00:11:16.534 "claim_type": "exclusive_write", 00:11:16.534 "zoned": false, 00:11:16.534 "supported_io_types": { 00:11:16.534 "read": 
true, 00:11:16.534 "write": true, 00:11:16.534 "unmap": true, 00:11:16.534 "flush": true, 00:11:16.534 "reset": true, 00:11:16.534 "nvme_admin": false, 00:11:16.534 "nvme_io": false, 00:11:16.534 "nvme_io_md": false, 00:11:16.534 "write_zeroes": true, 00:11:16.534 "zcopy": true, 00:11:16.534 "get_zone_info": false, 00:11:16.534 "zone_management": false, 00:11:16.534 "zone_append": false, 00:11:16.534 "compare": false, 00:11:16.534 "compare_and_write": false, 00:11:16.534 "abort": true, 00:11:16.534 "seek_hole": false, 00:11:16.534 "seek_data": false, 00:11:16.534 "copy": true, 00:11:16.534 "nvme_iov_md": false 00:11:16.534 }, 00:11:16.534 "memory_domains": [ 00:11:16.534 { 00:11:16.534 "dma_device_id": "system", 00:11:16.534 "dma_device_type": 1 00:11:16.534 }, 00:11:16.534 { 00:11:16.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.534 "dma_device_type": 2 00:11:16.534 } 00:11:16.534 ], 00:11:16.534 "driver_specific": {} 00:11:16.534 } 00:11:16.534 ]' 00:11:16.535 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:16.535 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:16.535 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:16.535 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:16.535 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:16.535 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:16.535 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:16.535 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:17.905 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:17.905 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:17.905 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:17.905 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:17.905 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:19.799 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:19.799 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:19.799 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:19.799 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:19.799 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:19.799 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:19.799 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:19.799 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:19.799 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:19.799 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:19.799 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:19.799 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:19.799 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:19.799 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:19.799 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:19.799 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:19.799 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:19.799 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:20.056 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:20.987 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:20.987 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:20.987 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:20.987 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.987 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.987 ************************************ 00:11:20.987 START TEST filesystem_ext4 00:11:20.987 ************************************ 00:11:20.987 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
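Pulling the last few blocks together: the target is provisioned over RPC with a TCP transport (in-capsule data size 0 for this no_in_capsule variant), a 512 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDKISFASTANDAWESOME, that bdev as its namespace, and a listener on 10.0.0.2:4420; the host then runs `nvme connect`, waitforserial polls lsblk until a device with that serial appears (nvme0n1 here), and parted lays down one GPT partition for the filesystem tests. The same flow written out with scripts/rpc.py (rpc_cmd in the trace is effectively a wrapper around it; the hostnqn/hostid are the values nvme gen-hostnqn produced for this box):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1

  $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
  $RPC bdev_malloc_create 512 512 -b Malloc1
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc1
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

  nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
      --hostid=00ad29c2-ccbd-e911-906e-0017a4403562

  dev=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  mkdir -p /mnt/device
  parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe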
00:11:20.987 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:20.987 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:20.987 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:20.987 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:20.987 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:20.987 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:20.987 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:20.987 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:20.987 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:20.987 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:20.987 mke2fs 1.47.0 (5-Feb-2023) 00:11:21.244 Discarding device blocks: 0/522240 done 00:11:21.244 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:21.244 Filesystem UUID: 6dbe09c2-7346-40cb-b83e-32c25e637c46 00:11:21.244 Superblock backups stored on blocks: 00:11:21.244 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:21.244 00:11:21.244 Allocating group tables: 0/64 done 00:11:21.244 Writing inode tables: 0/64 done 00:11:21.244 Creating journal (8192 blocks): done 00:11:21.244 Writing superblocks and filesystem accounting information: 0/64 done 00:11:21.244 00:11:21.244 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:21.244 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:27.794 
19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3647868 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:27.794 00:11:27.794 real 0m6.157s 00:11:27.794 user 0m0.022s 00:11:27.794 sys 0m0.079s 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:27.794 ************************************ 00:11:27.794 END TEST filesystem_ext4 00:11:27.794 ************************************ 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.794 ************************************ 00:11:27.794 START TEST filesystem_btrfs 00:11:27.794 ************************************ 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:27.794 19:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:27.794 btrfs-progs v6.8.1 00:11:27.794 See https://btrfs.readthedocs.io for more information. 00:11:27.794 00:11:27.794 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:27.794 NOTE: several default settings have changed in version 5.15, please make sure 00:11:27.794 this does not affect your deployments: 00:11:27.794 - DUP for metadata (-m dup) 00:11:27.794 - enabled no-holes (-O no-holes) 00:11:27.794 - enabled free-space-tree (-R free-space-tree) 00:11:27.794 00:11:27.794 Label: (null) 00:11:27.794 UUID: a0d23641-4e6e-4840-a82f-54d5e88c5f0a 00:11:27.794 Node size: 16384 00:11:27.794 Sector size: 4096 (CPU page size: 4096) 00:11:27.794 Filesystem size: 510.00MiB 00:11:27.794 Block group profiles: 00:11:27.794 Data: single 8.00MiB 00:11:27.794 Metadata: DUP 32.00MiB 00:11:27.794 System: DUP 8.00MiB 00:11:27.794 SSD detected: yes 00:11:27.794 Zoned device: no 00:11:27.794 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:27.794 Checksum: crc32c 00:11:27.794 Number of devices: 1 00:11:27.794 Devices: 00:11:27.794 ID SIZE PATH 00:11:27.794 1 510.00MiB /dev/nvme0n1p1 00:11:27.794 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3647868 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:27.794 
19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:27.794 00:11:27.794 real 0m0.580s 00:11:27.794 user 0m0.033s 00:11:27.794 sys 0m0.103s 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.794 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:27.794 ************************************ 00:11:27.795 END TEST filesystem_btrfs 00:11:27.795 ************************************ 00:11:28.052 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:28.052 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:28.052 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.052 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.052 ************************************ 00:11:28.052 START TEST filesystem_xfs 00:11:28.052 ************************************ 00:11:28.052 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:28.052 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:28.052 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:28.052 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:28.052 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:28.052 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:28.052 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:28.052 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:28.052 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:28.052 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:28.052 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:28.052 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:28.052 = sectsz=512 attr=2, projid32bit=1 00:11:28.052 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:28.052 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:28.052 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:28.052 = sunit=0 swidth=0 blks 00:11:28.052 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:28.052 log =internal log bsize=4096 blocks=16384, version=2 00:11:28.052 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:28.052 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:28.982 Discarding blocks...Done. 00:11:28.982 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:28.982 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:31.502 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:31.502 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:31.502 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:31.502 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:31.502 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:31.502 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:31.502 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3647868 00:11:31.502 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:31.502 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:31.502 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:31.502 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:31.502 00:11:31.502 real 0m3.343s 00:11:31.502 user 0m0.023s 00:11:31.502 sys 0m0.076s 00:11:31.502 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.502 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:31.502 ************************************ 00:11:31.502 END TEST filesystem_xfs 00:11:31.502 ************************************ 00:11:31.502 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.760 19:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3647868 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3647868 ']' 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3647868 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3647868 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3647868' 00:11:31.760 killing process with pid 3647868 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3647868 00:11:31.760 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 3647868 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:32.327 00:11:32.327 real 0m16.762s 00:11:32.327 user 1m6.114s 00:11:32.327 sys 0m1.351s 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.327 ************************************ 00:11:32.327 END TEST nvmf_filesystem_no_in_capsule 00:11:32.327 ************************************ 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:32.327 ************************************ 00:11:32.327 START TEST nvmf_filesystem_in_capsule 00:11:32.327 ************************************ 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3650857 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3650857 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3650857 ']' 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
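The entries above record nvmfappstart launching the target for the in-capsule test group inside the cvl_0_0_ns_spdk network namespace and then waiting for its RPC socket. In outline it amounts to the following; the binary path, flags, and namespace name are copied from the trace, while the readiness loop is an illustrative stand-in for the script's waitforlisten helper, not a copy of it.

#!/usr/bin/env bash
# Sketch of the traced start-up sequence (assumed polling, real paths from the log).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec cvl_0_0_ns_spdk \
    "${SPDK_DIR}/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket ${RPC_SOCK}..."
for _ in $(seq 1 100); do
    # rpc.py fails until the target is ready to serve JSON-RPC requests.
    if "${SPDK_DIR}/scripts/rpc.py" -s "${RPC_SOCK}" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.5
done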
00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.327 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.327 [2024-11-26 19:12:55.301637] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:11:32.327 [2024-11-26 19:12:55.301690] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.327 [2024-11-26 19:12:55.383005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.327 [2024-11-26 19:12:55.423521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.327 [2024-11-26 19:12:55.423560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.327 [2024-11-26 19:12:55.423567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.327 [2024-11-26 19:12:55.423572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.327 [2024-11-26 19:12:55.423577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.327 [2024-11-26 19:12:55.425037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.327 [2024-11-26 19:12:55.425142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.327 [2024-11-26 19:12:55.425281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.327 [2024-11-26 19:12:55.425283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.260 [2024-11-26 19:12:56.173227] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.260 19:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.260 Malloc1 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.260 [2024-11-26 19:12:56.337852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:33.260 19:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:33.260 { 00:11:33.260 "name": "Malloc1", 00:11:33.260 "aliases": [ 00:11:33.260 "7b436155-a4eb-4124-a851-92a5c19cb139" 00:11:33.260 ], 00:11:33.260 "product_name": "Malloc disk", 00:11:33.260 "block_size": 512, 00:11:33.260 "num_blocks": 1048576, 00:11:33.260 "uuid": "7b436155-a4eb-4124-a851-92a5c19cb139", 00:11:33.260 "assigned_rate_limits": { 00:11:33.260 "rw_ios_per_sec": 0, 00:11:33.260 "rw_mbytes_per_sec": 0, 00:11:33.260 "r_mbytes_per_sec": 0, 00:11:33.260 "w_mbytes_per_sec": 0 00:11:33.260 }, 00:11:33.260 "claimed": true, 00:11:33.260 "claim_type": "exclusive_write", 00:11:33.260 "zoned": false, 00:11:33.260 "supported_io_types": { 00:11:33.260 "read": true, 00:11:33.260 "write": true, 00:11:33.260 "unmap": true, 00:11:33.260 "flush": true, 00:11:33.260 "reset": true, 00:11:33.260 "nvme_admin": false, 00:11:33.260 "nvme_io": false, 00:11:33.260 "nvme_io_md": false, 00:11:33.260 "write_zeroes": true, 00:11:33.260 "zcopy": true, 00:11:33.260 "get_zone_info": false, 00:11:33.260 "zone_management": false, 00:11:33.260 "zone_append": false, 00:11:33.260 "compare": false, 00:11:33.260 "compare_and_write": false, 00:11:33.260 "abort": true, 00:11:33.260 "seek_hole": false, 00:11:33.260 "seek_data": false, 00:11:33.260 "copy": true, 00:11:33.260 "nvme_iov_md": false 00:11:33.260 }, 00:11:33.260 "memory_domains": [ 00:11:33.260 { 00:11:33.260 "dma_device_id": "system", 00:11:33.260 "dma_device_type": 1 00:11:33.260 }, 00:11:33.260 { 00:11:33.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.260 "dma_device_type": 2 00:11:33.260 } 00:11:33.260 ], 00:11:33.260 "driver_specific": {} 00:11:33.260 } 00:11:33.260 ]' 00:11:33.260 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:33.517 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:33.517 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:33.517 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:33.517 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:33.517 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:33.517 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:33.517 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:34.448 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:34.448 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:34.448 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.448 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:34.448 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:36.975 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:36.975 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:36.975 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:36.975 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:36.975 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.975 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:36.975 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:36.975 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:36.975 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:36.975 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:36.975 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:36.975 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:36.975 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:36.975 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:36.975 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:36.975 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:36.975 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:36.975 19:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:36.975 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:37.906 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:37.906 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:37.906 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:37.906 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.906 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.906 ************************************ 00:11:37.906 START TEST filesystem_in_capsule_ext4 00:11:37.906 ************************************ 00:11:37.906 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:37.906 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:37.906 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:37.906 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:37.906 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:37.906 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:37.906 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:37.906 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:37.906 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:37.906 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:37.906 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:37.906 mke2fs 1.47.0 (5-Feb-2023) 00:11:37.906 Discarding device blocks: 0/522240 done 00:11:38.163 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:38.163 Filesystem UUID: bf1e5af4-8ddc-4b99-bf5b-5fa624e25a67 00:11:38.163 Superblock backups stored on blocks: 00:11:38.163 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:38.163 00:11:38.163 Allocating group tables: 0/64 done 00:11:38.163 Writing inode tables: 
0/64 done 00:11:38.421 Creating journal (8192 blocks): done 00:11:38.677 Writing superblocks and filesystem accounting information: 0/64 done 00:11:38.677 00:11:38.677 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:38.677 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3650857 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:45.226 00:11:45.226 real 0m6.677s 00:11:45.226 user 0m0.020s 00:11:45.226 sys 0m0.080s 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:45.226 ************************************ 00:11:45.226 END TEST filesystem_in_capsule_ext4 00:11:45.226 ************************************ 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.226 
************************************ 00:11:45.226 START TEST filesystem_in_capsule_btrfs 00:11:45.226 ************************************ 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:45.226 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:45.226 btrfs-progs v6.8.1 00:11:45.226 See https://btrfs.readthedocs.io for more information. 00:11:45.227 00:11:45.227 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:45.227 NOTE: several default settings have changed in version 5.15, please make sure 00:11:45.227 this does not affect your deployments: 00:11:45.227 - DUP for metadata (-m dup) 00:11:45.227 - enabled no-holes (-O no-holes) 00:11:45.227 - enabled free-space-tree (-R free-space-tree) 00:11:45.227 00:11:45.227 Label: (null) 00:11:45.227 UUID: 8d26d32d-b1cc-4097-898b-52103368db97 00:11:45.227 Node size: 16384 00:11:45.227 Sector size: 4096 (CPU page size: 4096) 00:11:45.227 Filesystem size: 510.00MiB 00:11:45.227 Block group profiles: 00:11:45.227 Data: single 8.00MiB 00:11:45.227 Metadata: DUP 32.00MiB 00:11:45.227 System: DUP 8.00MiB 00:11:45.227 SSD detected: yes 00:11:45.227 Zoned device: no 00:11:45.227 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:45.227 Checksum: crc32c 00:11:45.227 Number of devices: 1 00:11:45.227 Devices: 00:11:45.227 ID SIZE PATH 00:11:45.227 1 510.00MiB /dev/nvme0n1p1 00:11:45.227 00:11:45.227 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:45.227 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3650857 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:45.227 00:11:45.227 real 0m0.616s 00:11:45.227 user 0m0.031s 00:11:45.227 sys 0m0.108s 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:45.227 ************************************ 00:11:45.227 END TEST filesystem_in_capsule_btrfs 00:11:45.227 ************************************ 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.227 ************************************ 00:11:45.227 START TEST filesystem_in_capsule_xfs 00:11:45.227 ************************************ 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:45.227 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:45.484 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:45.484 = sectsz=512 attr=2, projid32bit=1 00:11:45.484 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:45.484 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:45.484 data = bsize=4096 blocks=130560, imaxpct=25 00:11:45.484 = sunit=0 swidth=0 blks 00:11:45.485 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:45.485 log =internal log bsize=4096 blocks=16384, version=2 00:11:45.485 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:45.485 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:46.049 Discarding blocks...Done. 
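The mount/touch/sync/rm/umount sequence and the liveness checks that follow are repeated verbatim for every filesystem subtest in this log (filesystem.sh lines 23-43). A compact sketch of that verification block, using the device names from the trace and assuming $nvmfpid holds the target pid from the start-up step; set -e stands in for the script's own error handling.

#!/usr/bin/env bash
# Sketch of the per-filesystem verification traced above.
set -e
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device

kill -0 "$nvmfpid"                       # target process must still be running
lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still visible to the host
lsblk -l -o NAME | grep -q -w nvme0n1p1  # test partition still present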
00:11:46.049 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:46.049 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:47.943 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:47.944 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:47.944 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:47.944 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:47.944 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:47.944 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:47.944 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3650857 00:11:47.944 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:47.944 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:47.944 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:47.944 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:47.944 00:11:47.944 real 0m2.635s 00:11:47.944 user 0m0.028s 00:11:47.944 sys 0m0.070s 00:11:47.944 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.944 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:47.944 ************************************ 00:11:47.944 END TEST filesystem_in_capsule_xfs 00:11:47.944 ************************************ 00:11:47.944 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:48.200 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:48.200 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:48.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.457 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3650857 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3650857 ']' 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3650857 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3650857 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3650857' 00:11:48.458 killing process with pid 3650857 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3650857 00:11:48.458 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3650857 00:11:48.716 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:48.716 00:11:48.716 real 0m16.576s 00:11:48.716 user 1m5.301s 00:11:48.716 sys 0m1.436s 00:11:48.716 19:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.716 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.716 ************************************ 00:11:48.716 END TEST nvmf_filesystem_in_capsule 00:11:48.716 ************************************ 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:48.974 rmmod nvme_tcp 00:11:48.974 rmmod nvme_fabrics 00:11:48.974 rmmod nvme_keyring 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.974 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.878 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:51.136 00:11:51.136 real 0m42.101s 00:11:51.136 user 2m13.524s 00:11:51.136 sys 0m7.469s 00:11:51.136 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.136 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.136 
************************************ 00:11:51.136 END TEST nvmf_filesystem 00:11:51.136 ************************************ 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:51.136 ************************************ 00:11:51.136 START TEST nvmf_target_discovery 00:11:51.136 ************************************ 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:51.136 * Looking for test storage... 00:11:51.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:51.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.136 --rc genhtml_branch_coverage=1 00:11:51.136 --rc genhtml_function_coverage=1 00:11:51.136 --rc genhtml_legend=1 00:11:51.136 --rc geninfo_all_blocks=1 00:11:51.136 --rc geninfo_unexecuted_blocks=1 00:11:51.136 00:11:51.136 ' 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:51.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.136 --rc genhtml_branch_coverage=1 00:11:51.136 --rc genhtml_function_coverage=1 00:11:51.136 --rc genhtml_legend=1 00:11:51.136 --rc geninfo_all_blocks=1 00:11:51.136 --rc geninfo_unexecuted_blocks=1 00:11:51.136 00:11:51.136 ' 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:51.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.136 --rc genhtml_branch_coverage=1 00:11:51.136 --rc genhtml_function_coverage=1 00:11:51.136 --rc genhtml_legend=1 00:11:51.136 --rc geninfo_all_blocks=1 00:11:51.136 --rc geninfo_unexecuted_blocks=1 00:11:51.136 00:11:51.136 ' 00:11:51.136 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:51.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.136 --rc genhtml_branch_coverage=1 00:11:51.136 --rc genhtml_function_coverage=1 00:11:51.136 --rc genhtml_legend=1 00:11:51.136 --rc geninfo_all_blocks=1 00:11:51.136 --rc geninfo_unexecuted_blocks=1 00:11:51.136 00:11:51.136 ' 00:11:51.137 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:51.137 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:51.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:51.396 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.397 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.397 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.397 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:51.397 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:51.397 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:51.397 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:57.962 19:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:57.962 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:57.962 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:57.962 Found net devices under 0000:86:00.0: cvl_0_0 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
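The records around this point show nvmf/common.sh resolving each supported e810 PCI function to its kernel net device through the sysfs glob "/sys/bus/pci/devices/$pci/net/"*. A minimal standalone sketch of that lookup, assuming the same sysfs layout; the 0000:86:00.0 address is the one reported in the trace above, not a new value:

    #!/usr/bin/env bash
    # List the net devices bound to one PCI function, the same lookup the
    # trace performs before printing "Found net devices under 0000:86:00.0: cvl_0_0".
    pci=0000:86:00.0                        # example address taken from the log above
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        # each entry is a directory named after the interface; strip the sysfs prefix
        [ -e "$dev" ] && echo "${dev##*/}"
    done
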
00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:57.962 Found net devices under 0000:86:00.1: cvl_0_1 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:57.962 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:57.963 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:57.963 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:57.963 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:57.963 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.963 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.963 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:57.963 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:57.963 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:57.963 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:57.963 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:57.963 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:57.963 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:57.963 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.963 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:57.963 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:57.963 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:57.963 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:57.963 19:13:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:57.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:11:57.963 00:11:57.963 --- 10.0.0.2 ping statistics --- 00:11:57.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.963 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:57.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:11:57.963 00:11:57.963 --- 10.0.0.1 ping statistics --- 00:11:57.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.963 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3657161 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3657161 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3657161 ']' 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.963 [2024-11-26 19:13:20.331407] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:11:57.963 [2024-11-26 19:13:20.331451] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.963 [2024-11-26 19:13:20.412033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.963 [2024-11-26 19:13:20.456746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.963 [2024-11-26 19:13:20.456779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.963 [2024-11-26 19:13:20.456787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.963 [2024-11-26 19:13:20.456793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.963 [2024-11-26 19:13:20.456798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
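The trace here launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then sits in waitforlisten 3657161 until the target's RPC socket answers. A rough sketch of that kind of readiness check (not the exact waitforlisten implementation), assuming the SPDK checkout path shown in the log and the default /var/tmp/spdk.sock RPC socket:

    #!/usr/bin/env bash
    # Poll the SPDK RPC socket until the freshly started nvmf_tgt responds,
    # approximating the wait the autotest does for pid 3657161 above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        if "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            echo "nvmf_tgt is up and serving RPC on $sock"
            break
        fi
        sleep 0.1                           # target is still initializing; retry
    done
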
00:11:57.963 [2024-11-26 19:13:20.458369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.963 [2024-11-26 19:13:20.458399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.963 [2024-11-26 19:13:20.458531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.963 [2024-11-26 19:13:20.458532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.963 [2024-11-26 19:13:20.609040] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.963 Null1 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.963 19:13:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.963 [2024-11-26 19:13:20.662851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.963 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.964 Null2 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:57.964 Null3 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.964 Null4 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.964 19:13:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:57.964 00:11:57.964 Discovery Log Number of Records 6, Generation counter 6 00:11:57.964 =====Discovery Log Entry 0====== 00:11:57.964 trtype: tcp 00:11:57.964 adrfam: ipv4 00:11:57.964 subtype: current discovery subsystem 00:11:57.964 treq: not required 00:11:57.964 portid: 0 00:11:57.964 trsvcid: 4420 00:11:57.964 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:57.964 traddr: 10.0.0.2 00:11:57.964 eflags: explicit discovery connections, duplicate discovery information 00:11:57.964 sectype: none 00:11:57.964 =====Discovery Log Entry 1====== 00:11:57.964 trtype: tcp 00:11:57.964 adrfam: ipv4 00:11:57.964 subtype: nvme subsystem 00:11:57.964 treq: not required 00:11:57.964 portid: 0 00:11:57.964 trsvcid: 4420 00:11:57.964 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:57.964 traddr: 10.0.0.2 00:11:57.964 eflags: none 00:11:57.964 sectype: none 00:11:57.964 =====Discovery Log Entry 2====== 00:11:57.964 trtype: tcp 00:11:57.964 adrfam: ipv4 00:11:57.964 subtype: nvme subsystem 00:11:57.964 treq: not required 00:11:57.964 portid: 0 00:11:57.964 trsvcid: 4420 00:11:57.964 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:57.964 traddr: 10.0.0.2 00:11:57.964 eflags: none 00:11:57.964 sectype: none 00:11:57.964 =====Discovery Log Entry 3====== 00:11:57.964 trtype: tcp 00:11:57.964 adrfam: ipv4 00:11:57.964 subtype: nvme subsystem 00:11:57.964 treq: not required 00:11:57.964 portid: 0 00:11:57.964 trsvcid: 4420 00:11:57.964 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:57.964 traddr: 10.0.0.2 00:11:57.964 eflags: none 00:11:57.964 sectype: none 00:11:57.964 =====Discovery Log Entry 4====== 00:11:57.964 trtype: tcp 00:11:57.964 adrfam: ipv4 00:11:57.964 subtype: nvme subsystem 
00:11:57.964 treq: not required 00:11:57.964 portid: 0 00:11:57.964 trsvcid: 4420 00:11:57.964 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:57.964 traddr: 10.0.0.2 00:11:57.964 eflags: none 00:11:57.964 sectype: none 00:11:57.964 =====Discovery Log Entry 5====== 00:11:57.964 trtype: tcp 00:11:57.964 adrfam: ipv4 00:11:57.964 subtype: discovery subsystem referral 00:11:57.964 treq: not required 00:11:57.964 portid: 0 00:11:57.964 trsvcid: 4430 00:11:57.964 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:57.964 traddr: 10.0.0.2 00:11:57.964 eflags: none 00:11:57.964 sectype: none 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:57.964 Perform nvmf subsystem discovery via RPC 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.964 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.964 [ 00:11:57.964 { 00:11:57.964 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:57.964 "subtype": "Discovery", 00:11:57.964 "listen_addresses": [ 00:11:57.964 { 00:11:57.964 "trtype": "TCP", 00:11:57.964 "adrfam": "IPv4", 00:11:57.964 "traddr": "10.0.0.2", 00:11:57.964 "trsvcid": "4420" 00:11:57.964 } 00:11:57.964 ], 00:11:57.964 "allow_any_host": true, 00:11:57.964 "hosts": [] 00:11:57.964 }, 00:11:57.964 { 00:11:57.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:57.964 "subtype": "NVMe", 00:11:57.964 "listen_addresses": [ 00:11:57.964 { 00:11:57.964 "trtype": "TCP", 00:11:57.964 "adrfam": "IPv4", 00:11:57.964 "traddr": "10.0.0.2", 00:11:57.964 "trsvcid": "4420" 00:11:57.964 } 00:11:57.964 ], 00:11:57.965 "allow_any_host": true, 00:11:57.965 "hosts": [], 00:11:57.965 "serial_number": "SPDK00000000000001", 00:11:57.965 "model_number": "SPDK bdev Controller", 00:11:57.965 "max_namespaces": 32, 00:11:57.965 "min_cntlid": 1, 00:11:57.965 "max_cntlid": 65519, 00:11:57.965 "namespaces": [ 00:11:57.965 { 00:11:57.965 "nsid": 1, 00:11:57.965 "bdev_name": "Null1", 00:11:57.965 "name": "Null1", 00:11:57.965 "nguid": "FF26D80FCA2B458EA221F01FDBBB7AD5", 00:11:57.965 "uuid": "ff26d80f-ca2b-458e-a221-f01fdbbb7ad5" 00:11:57.965 } 00:11:57.965 ] 00:11:57.965 }, 00:11:57.965 { 00:11:57.965 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:57.965 "subtype": "NVMe", 00:11:57.965 "listen_addresses": [ 00:11:57.965 { 00:11:57.965 "trtype": "TCP", 00:11:57.965 "adrfam": "IPv4", 00:11:57.965 "traddr": "10.0.0.2", 00:11:57.965 "trsvcid": "4420" 00:11:57.965 } 00:11:57.965 ], 00:11:57.965 "allow_any_host": true, 00:11:57.965 "hosts": [], 00:11:57.965 "serial_number": "SPDK00000000000002", 00:11:57.965 "model_number": "SPDK bdev Controller", 00:11:57.965 "max_namespaces": 32, 00:11:57.965 "min_cntlid": 1, 00:11:57.965 "max_cntlid": 65519, 00:11:57.965 "namespaces": [ 00:11:57.965 { 00:11:57.965 "nsid": 1, 00:11:57.965 "bdev_name": "Null2", 00:11:57.965 "name": "Null2", 00:11:57.965 "nguid": "12F35131C8F748F382213852BF9731D2", 00:11:57.965 "uuid": "12f35131-c8f7-48f3-8221-3852bf9731d2" 00:11:57.965 } 00:11:57.965 ] 00:11:57.965 }, 00:11:57.965 { 00:11:57.965 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:57.965 "subtype": "NVMe", 00:11:57.965 "listen_addresses": [ 00:11:57.965 { 00:11:57.965 "trtype": "TCP", 00:11:57.965 "adrfam": "IPv4", 00:11:57.965 "traddr": "10.0.0.2", 
00:11:57.965 "trsvcid": "4420" 00:11:57.965 } 00:11:57.965 ], 00:11:57.965 "allow_any_host": true, 00:11:57.965 "hosts": [], 00:11:57.965 "serial_number": "SPDK00000000000003", 00:11:57.965 "model_number": "SPDK bdev Controller", 00:11:57.965 "max_namespaces": 32, 00:11:57.965 "min_cntlid": 1, 00:11:57.965 "max_cntlid": 65519, 00:11:57.965 "namespaces": [ 00:11:57.965 { 00:11:57.965 "nsid": 1, 00:11:57.965 "bdev_name": "Null3", 00:11:57.965 "name": "Null3", 00:11:57.965 "nguid": "3E7A1AFBF8714E248F801D20485D1034", 00:11:57.965 "uuid": "3e7a1afb-f871-4e24-8f80-1d20485d1034" 00:11:57.965 } 00:11:57.965 ] 00:11:57.965 }, 00:11:57.965 { 00:11:57.965 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:57.965 "subtype": "NVMe", 00:11:57.965 "listen_addresses": [ 00:11:57.965 { 00:11:57.965 "trtype": "TCP", 00:11:57.965 "adrfam": "IPv4", 00:11:57.965 "traddr": "10.0.0.2", 00:11:57.965 "trsvcid": "4420" 00:11:57.965 } 00:11:57.965 ], 00:11:57.965 "allow_any_host": true, 00:11:57.965 "hosts": [], 00:11:57.965 "serial_number": "SPDK00000000000004", 00:11:57.965 "model_number": "SPDK bdev Controller", 00:11:57.965 "max_namespaces": 32, 00:11:57.965 "min_cntlid": 1, 00:11:57.965 "max_cntlid": 65519, 00:11:57.965 "namespaces": [ 00:11:57.965 { 00:11:57.965 "nsid": 1, 00:11:57.965 "bdev_name": "Null4", 00:11:57.965 "name": "Null4", 00:11:57.965 "nguid": "4CE79D991BB84029B587CF9BF49C87C3", 00:11:57.965 "uuid": "4ce79d99-1bb8-4029-b587-cf9bf49c87c3" 00:11:57.965 } 00:11:57.965 ] 00:11:57.965 } 00:11:57.965 ] 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.965 19:13:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.965 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:58.223 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:58.224 19:13:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:58.224 rmmod nvme_tcp 00:11:58.224 rmmod nvme_fabrics 00:11:58.224 rmmod nvme_keyring 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3657161 ']' 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3657161 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3657161 ']' 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3657161 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3657161 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3657161' 00:11:58.224 killing process with pid 3657161 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3657161 00:11:58.224 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3657161 00:11:58.483 19:13:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:58.483 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:58.483 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:58.483 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:58.483 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:58.483 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:58.483 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:58.483 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:58.483 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:58.483 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.483 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.483 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.389 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:00.389 00:12:00.389 real 0m9.437s 00:12:00.389 user 0m5.769s 00:12:00.389 sys 0m4.857s 00:12:00.389 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.389 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.389 ************************************ 00:12:00.648 END TEST nvmf_target_discovery 00:12:00.648 ************************************ 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:00.648 ************************************ 00:12:00.648 START TEST nvmf_referrals 00:12:00.648 ************************************ 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:00.648 * Looking for test storage... 
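The teardown that closes nvmf_target_discovery above (discovery.sh@42-44 and @47-49) walks the four test subsystems, drops each one together with its backing null bdev, removes the referral on port 4430, and finally confirms that no bdevs remain. A minimal standalone sketch of that cleanup, assuming rpc.py is invoked from the SPDK source tree against the default /var/tmp/spdk.sock socket (subsystem and bdev names taken from this run):

# Cleanup loop mirroring discovery.sh; names and addresses are the ones used in this run.
for i in $(seq 1 4); do
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"   # remove the NVMe-oF subsystem
    scripts/rpc.py bdev_null_delete "Null${i}"                             # delete its backing null bdev
done
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430   # drop the referral registered during setup
scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'                           # expect empty output once cleanup succeeds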
00:12:00.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.648 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:00.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.649 --rc genhtml_branch_coverage=1 00:12:00.649 --rc genhtml_function_coverage=1 00:12:00.649 --rc genhtml_legend=1 00:12:00.649 --rc geninfo_all_blocks=1 00:12:00.649 --rc geninfo_unexecuted_blocks=1 00:12:00.649 00:12:00.649 ' 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:00.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.649 --rc genhtml_branch_coverage=1 00:12:00.649 --rc genhtml_function_coverage=1 00:12:00.649 --rc genhtml_legend=1 00:12:00.649 --rc geninfo_all_blocks=1 00:12:00.649 --rc geninfo_unexecuted_blocks=1 00:12:00.649 00:12:00.649 ' 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:00.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.649 --rc genhtml_branch_coverage=1 00:12:00.649 --rc genhtml_function_coverage=1 00:12:00.649 --rc genhtml_legend=1 00:12:00.649 --rc geninfo_all_blocks=1 00:12:00.649 --rc geninfo_unexecuted_blocks=1 00:12:00.649 00:12:00.649 ' 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:00.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.649 --rc genhtml_branch_coverage=1 00:12:00.649 --rc genhtml_function_coverage=1 00:12:00.649 --rc genhtml_legend=1 00:12:00.649 --rc geninfo_all_blocks=1 00:12:00.649 --rc geninfo_unexecuted_blocks=1 00:12:00.649 00:12:00.649 ' 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.649 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:00.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:00.907 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:07.470 19:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:07.470 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:07.470 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:07.470 
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:07.470 Found net devices under 0000:86:00.0: cvl_0_0 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:07.470 Found net devices under 0000:86:00.1: cvl_0_1 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:07.470 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:07.471 19:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:07.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:07.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:12:07.471 00:12:07.471 --- 10.0.0.2 ping statistics --- 00:12:07.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.471 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:12:07.471 00:12:07.471 --- 10.0.0.1 ping statistics --- 00:12:07.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.471 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3660929 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3660929 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3660929 ']' 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
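With connectivity between cvl_0_0 (10.0.0.2, inside cvl_0_0_ns_spdk) and cvl_0_1 (10.0.0.1, on the host) verified by the pings above, nvmfappstart launches the target inside that namespace and waits for its RPC socket. A minimal sketch of the same sequence, assuming the workspace path, core mask, and socket path shown in this run; the rpc_get_methods polling loop here stands in for the retry logic of waitforlisten:

# Launch nvmf_tgt inside the target namespace (arguments copied from this run).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the target answers; waitforlisten adds timeouts on top of this.
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done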
00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.471 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.471 [2024-11-26 19:13:29.866732] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:12:07.471 [2024-11-26 19:13:29.866773] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.471 [2024-11-26 19:13:29.946587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.471 [2024-11-26 19:13:29.989310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.471 [2024-11-26 19:13:29.989346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.471 [2024-11-26 19:13:29.989353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.471 [2024-11-26 19:13:29.989359] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.471 [2024-11-26 19:13:29.989364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.471 [2024-11-26 19:13:29.990959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.471 [2024-11-26 19:13:29.991076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.471 [2024-11-26 19:13:29.991079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.471 [2024-11-26 19:13:29.990975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.730 [2024-11-26 19:13:30.757520] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
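Once the application is up, the referrals test creates the TCP transport, exposes the discovery subsystem on 10.0.0.2:8009, and registers the three referrals that the checks below compare against. A minimal sketch of that setup, assuming rpc.py is run from the SPDK tree against the default socket; the initiator-side view is read back with nvme discover and the same jq filter the test uses:

# Transport, discovery listener, and referral registration (values taken from this run).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
scripts/rpc.py nvmf_discovery_get_referrals | jq length          # expect 3
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'   # referral addresses as seen by an initiator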
00:12:07.730 [2024-11-26 19:13:30.786830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.730 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.987 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:07.987 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:07.987 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:07.987 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:07.987 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:07.987 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.987 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:07.987 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.987 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.987 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:07.987 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:07.987 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:07.987 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:07.987 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:07.987 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.987 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:07.987 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:07.987 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:07.987 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:07.987 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:07.987 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.988 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.988 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.988 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:07.988 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.988 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.244 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.244 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:08.244 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.244 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.244 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.244 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:08.244 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:08.244 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.244 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.244 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.244 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:08.244 19:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:08.244 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:08.244 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:08.244 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:08.244 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:08.244 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:08.501 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:08.758 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:08.758 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:08.758 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:08.758 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:08.758 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:08.758 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:08.758 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:08.758 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:08.758 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:08.758 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:08.758 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:08.758 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:08.758 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:09.016 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:09.016 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:09.016 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.016 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.016 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.016 19:13:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:09.016 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:09.016 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:09.016 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:09.016 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.016 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:09.016 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.016 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.016 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:09.016 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:09.016 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:09.016 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:09.016 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:09.292 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:09.292 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:09.292 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:09.292 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:09.292 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:09.292 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:09.292 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:09.292 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:09.292 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:09.293 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:09.550 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:09.550 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:09.550 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:09.550 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:09.550 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:09.550 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
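A condensed sketch of the referral round-trip exercised above, assuming the same rpc.py/nvme CLIs and the 127.0.0.2:4430 referral used in this run (the log's rpc_cmd helper issues the same RPCs):
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'   # RPC-side view of the referrals
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'   # host-side view via the discovery log page
rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1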
00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:09.808 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:09.808 rmmod nvme_tcp 00:12:10.065 rmmod nvme_fabrics 00:12:10.065 rmmod nvme_keyring 00:12:10.065 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:10.065 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:10.065 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:10.065 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3660929 ']' 00:12:10.065 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3660929 00:12:10.066 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3660929 ']' 00:12:10.066 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3660929 00:12:10.066 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:10.066 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.066 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3660929 00:12:10.066 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.066 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.066 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3660929' 00:12:10.066 killing process with pid 3660929 00:12:10.066 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3660929 00:12:10.066 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3660929 00:12:10.325 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:10.325 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:10.325 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:10.325 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:10.325 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:10.325 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:10.325 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:10.325 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:10.325 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:10.325 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.325 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.325 19:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.228 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:12.228 00:12:12.228 real 0m11.707s 00:12:12.228 user 0m15.508s 00:12:12.228 sys 0m5.359s 00:12:12.228 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.228 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.228 ************************************ 00:12:12.228 END TEST nvmf_referrals 00:12:12.228 ************************************ 00:12:12.228 19:13:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:12.228 19:13:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:12.228 19:13:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.228 19:13:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.487 ************************************ 00:12:12.487 START TEST nvmf_connect_disconnect 00:12:12.487 ************************************ 00:12:12.487 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:12.487 * Looking for test storage... 00:12:12.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.487 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:12.487 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:12.487 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.488 19:13:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:12.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.488 --rc genhtml_branch_coverage=1 00:12:12.488 --rc genhtml_function_coverage=1 00:12:12.488 --rc genhtml_legend=1 00:12:12.488 --rc geninfo_all_blocks=1 00:12:12.488 --rc geninfo_unexecuted_blocks=1 00:12:12.488 00:12:12.488 ' 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:12.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.488 --rc genhtml_branch_coverage=1 00:12:12.488 --rc genhtml_function_coverage=1 00:12:12.488 --rc genhtml_legend=1 00:12:12.488 --rc geninfo_all_blocks=1 00:12:12.488 --rc geninfo_unexecuted_blocks=1 00:12:12.488 00:12:12.488 ' 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:12.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.488 --rc genhtml_branch_coverage=1 00:12:12.488 --rc genhtml_function_coverage=1 00:12:12.488 --rc genhtml_legend=1 00:12:12.488 --rc geninfo_all_blocks=1 00:12:12.488 --rc geninfo_unexecuted_blocks=1 00:12:12.488 00:12:12.488 ' 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:12.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.488 --rc genhtml_branch_coverage=1 00:12:12.488 --rc genhtml_function_coverage=1 00:12:12.488 --rc genhtml_legend=1 00:12:12.488 --rc geninfo_all_blocks=1 00:12:12.488 --rc geninfo_unexecuted_blocks=1 00:12:12.488 00:12:12.488 ' 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.488 19:13:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:12.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:12.488 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:12.489 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.489 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:12.489 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:12.489 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:12.489 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:12.489 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.489 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:12.489 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:12.489 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:12.489 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.489 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.489 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.489 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:12.489 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:12.489 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:12.489 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:19.058 
19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:19.058 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:19.059 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:19.059 
19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:19.059 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:19.059 Found net devices under 0000:86:00.0: cvl_0_0 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:19.059 Found net devices under 0000:86:00.1: cvl_0_1 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:19.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:12:19.059 00:12:19.059 --- 10.0.0.2 ping statistics --- 00:12:19.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.059 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:19.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:12:19.059 00:12:19.059 --- 10.0.0.1 ping statistics --- 00:12:19.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.059 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3665012 00:12:19.059 19:13:41 
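A minimal sketch of the TCP test-bed setup walked through above, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing from this run: the target-side port is moved into its own network namespace, both ends are addressed, the listener port is opened in iptables, and reachability is checked in both directions.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator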
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3665012 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3665012 ']' 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.059 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.059 [2024-11-26 19:13:41.580789] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:12:19.059 [2024-11-26 19:13:41.580842] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.059 [2024-11-26 19:13:41.661390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:19.059 [2024-11-26 19:13:41.703662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.059 [2024-11-26 19:13:41.703702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.060 [2024-11-26 19:13:41.703709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.060 [2024-11-26 19:13:41.703715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.060 [2024-11-26 19:13:41.703720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:19.060 [2024-11-26 19:13:41.705191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.060 [2024-11-26 19:13:41.705300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.060 [2024-11-26 19:13:41.705407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.060 [2024-11-26 19:13:41.705409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.060 [2024-11-26 19:13:41.842553] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.060 19:13:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.060 [2024-11-26 19:13:41.910881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:19.060 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:22.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:35.611 rmmod nvme_tcp 00:12:35.611 rmmod nvme_fabrics 00:12:35.611 rmmod nvme_keyring 00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3665012 ']' 00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3665012 00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3665012 ']' 00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3665012 00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
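For reference, a sketch of the target-side provisioning the connect/disconnect loop above relies on, written with rpc.py for illustration (the log's rpc_cmd helper issues the same RPCs); the Malloc0 bdev name is taken from the log output:
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc.py bdev_malloc_create 64 512                     # reported above as bdev=Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420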
00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3665012 00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:35.611 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:35.612 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3665012' 00:12:35.612 killing process with pid 3665012 00:12:35.612 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3665012 00:12:35.612 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3665012 00:12:35.612 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:35.612 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:35.612 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:35.612 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:35.612 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:35.612 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:35.612 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:35.612 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:35.612 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:35.612 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.612 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.612 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.520 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:37.520 00:12:37.520 real 0m25.165s 00:12:37.520 user 1m8.201s 00:12:37.520 sys 0m5.787s 00:12:37.520 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.520 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:37.520 ************************************ 00:12:37.520 END TEST nvmf_connect_disconnect 00:12:37.520 ************************************ 00:12:37.520 19:14:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:37.520 19:14:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.520 19:14:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.520 19:14:00 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.520 ************************************ 00:12:37.520 START TEST nvmf_multitarget 00:12:37.520 ************************************ 00:12:37.520 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:37.779 * Looking for test storage... 00:12:37.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:37.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.779 --rc genhtml_branch_coverage=1 00:12:37.779 --rc genhtml_function_coverage=1 00:12:37.779 --rc genhtml_legend=1 00:12:37.779 --rc geninfo_all_blocks=1 00:12:37.779 --rc geninfo_unexecuted_blocks=1 00:12:37.779 00:12:37.779 ' 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:37.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.779 --rc genhtml_branch_coverage=1 00:12:37.779 --rc genhtml_function_coverage=1 00:12:37.779 --rc genhtml_legend=1 00:12:37.779 --rc geninfo_all_blocks=1 00:12:37.779 --rc geninfo_unexecuted_blocks=1 00:12:37.779 00:12:37.779 ' 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:37.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.779 --rc genhtml_branch_coverage=1 00:12:37.779 --rc genhtml_function_coverage=1 00:12:37.779 --rc genhtml_legend=1 00:12:37.779 --rc geninfo_all_blocks=1 00:12:37.779 --rc geninfo_unexecuted_blocks=1 00:12:37.779 00:12:37.779 ' 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:37.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.779 --rc genhtml_branch_coverage=1 00:12:37.779 --rc genhtml_function_coverage=1 00:12:37.779 --rc genhtml_legend=1 00:12:37.779 --rc geninfo_all_blocks=1 00:12:37.779 --rc geninfo_unexecuted_blocks=1 00:12:37.779 00:12:37.779 ' 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.779 19:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.779 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:37.780 19:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:37.780 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:44.348 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:44.348 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:44.348 Found net devices under 0000:86:00.0: cvl_0_0 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:44.348 Found net devices under 0000:86:00.1: cvl_0_1 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
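The "Found net devices under <bdf>" messages above come from gather_supported_nvmf_pci_devs in nvmf/common.sh, which maps each supported PCI function to its kernel netdev through sysfs. A rough sketch of that lookup step, using the two E810 addresses reported in this trace (the driver checks and RDMA branches are omitted):

  # resolve each PCI function to the netdev name(s) the kernel created for it
  PCI_ADDRS=(0000:86:00.0 0000:86:00.1)   # the E810 ports found on this host
  net_devs=()
  for pci in "${PCI_ADDRS[@]}"; do
      # every network PCI function exposes its interfaces under .../net/
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")        # keep only the interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done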
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:44.348 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:44.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:44.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:12:44.349 00:12:44.349 --- 10.0.0.2 ping statistics --- 00:12:44.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.349 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:44.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:12:44.349 00:12:44.349 --- 10.0.0.1 ping statistics --- 00:12:44.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.349 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3671412 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3671412 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3671412 ']' 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.349 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:44.349 [2024-11-26 19:14:06.838840] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
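The nvmf_tcp_init sequence traced above sets up the two-port fixture every TCP test in this job reuses: the target-side port is moved into its own network namespace, both ends get addresses on 10.0.0.0/24, the NVMe/TCP listener port is opened in iptables, and reachability is checked in both directions before nvmf_tgt is started inside the namespace. A minimal sketch of that sequence, with the interface names from this run treated as placeholders:

  IFACE_TGT=cvl_0_0            # port that moves into the target namespace
  IFACE_INI=cvl_0_1            # port that stays in the default namespace (initiator side)
  NS=cvl_0_0_ns_spdk
  TARGET_IP=10.0.0.2
  INITIATOR_IP=10.0.0.1

  ip -4 addr flush "$IFACE_TGT"
  ip -4 addr flush "$IFACE_INI"
  ip netns add "$NS"
  ip link set "$IFACE_TGT" netns "$NS"
  ip addr add "$INITIATOR_IP/24" dev "$IFACE_INI"
  ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$IFACE_TGT"
  ip link set "$IFACE_INI" up
  ip netns exec "$NS" ip link set "$IFACE_TGT" up
  ip netns exec "$NS" ip link set lo up
  # allow the NVMe/TCP listener port; the comment tag lets teardown strip the rule later
  iptables -I INPUT 1 -i "$IFACE_INI" -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment SPDK_NVMF
  ping -c 1 "$TARGET_IP"                         # initiator -> target
  ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"  # target -> initiator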
00:12:44.349 [2024-11-26 19:14:06.838883] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.349 [2024-11-26 19:14:06.915647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.349 [2024-11-26 19:14:06.958856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.349 [2024-11-26 19:14:06.958896] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.349 [2024-11-26 19:14:06.958903] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.349 [2024-11-26 19:14:06.958909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.349 [2024-11-26 19:14:06.958914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.349 [2024-11-26 19:14:06.960463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.349 [2024-11-26 19:14:06.960570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.349 [2024-11-26 19:14:06.960594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.349 [2024-11-26 19:14:06.960594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.349 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.349 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:44.349 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:44.349 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:44.349 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:44.349 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.349 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:44.349 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:44.349 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:44.349 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:44.349 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:44.349 "nvmf_tgt_1" 00:12:44.349 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:44.349 "nvmf_tgt_2" 00:12:44.349 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
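With nvmf_tgt listening, multitarget.sh exercises the multi-target RPCs: it counts the targets reported by nvmf_get_targets, creates nvmf_tgt_1 and nvmf_tgt_2 as shown above, re-counts, and then deletes them again (the deletions follow just below). A condensed sketch of that cycle, with rpc_py shortened to a relative path:

  set -e
  rpc_py=./test/nvmf/target/multitarget_rpc.py
  count() { "$rpc_py" nvmf_get_targets | jq length; }

  [ "$(count)" -eq 1 ]                              # only the default target exists
  "$rpc_py" nvmf_create_target -n nvmf_tgt_1 -s 32  # same arguments as in the run above
  "$rpc_py" nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$(count)" -eq 3 ]                              # default target + the two new ones
  "$rpc_py" nvmf_delete_target -n nvmf_tgt_1
  "$rpc_py" nvmf_delete_target -n nvmf_tgt_2
  [ "$(count)" -eq 1 ]                              # back to just the default target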
00:12:44.349 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:44.606 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:44.606 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:44.606 true 00:12:44.606 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:44.863 true 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:44.863 rmmod nvme_tcp 00:12:44.863 rmmod nvme_fabrics 00:12:44.863 rmmod nvme_keyring 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3671412 ']' 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3671412 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3671412 ']' 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3671412 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3671412 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:44.863 19:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3671412' 00:12:44.863 killing process with pid 3671412 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3671412 00:12:44.863 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3671412 00:12:45.122 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:45.122 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:45.123 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:45.123 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:45.123 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:45.123 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:45.123 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:45.123 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:45.123 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:45.123 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.123 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.123 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:47.656 00:12:47.656 real 0m9.616s 00:12:47.656 user 0m7.218s 00:12:47.656 sys 0m4.886s 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:47.656 ************************************ 00:12:47.656 END TEST nvmf_multitarget 00:12:47.656 ************************************ 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:47.656 ************************************ 00:12:47.656 START TEST nvmf_rpc 00:12:47.656 ************************************ 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:47.656 * Looking for test storage... 
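The tail of nvmf_multitarget above is the standard nvmftestfini teardown that every test in this suite runs: unload the initiator-side NVMe modules, stop the nvmf_tgt reactor process, strip only the SPDK-tagged iptables rules, and tear the namespace and addresses back down. A simplified sketch, where NVMF_PID stands for the nvmf_tgt pid of the run and the namespace removal is assumed to be ip netns delete (the trace hides it behind _remove_spdk_ns):

  NS=cvl_0_0_ns_spdk
  modprobe -v -r nvme-tcp                       # also pulls out nvme_fabrics/nvme_keyring
  modprobe -v -r nvme-fabrics
  # the helper checks ps --no-headers -o comm= "$NVMF_PID" first to decide between
  # plain kill and sudo kill; the plain case is shown here
  kill "$NVMF_PID"
  wait "$NVMF_PID"                              # nvmf_tgt is a child of the test shell
  # restore the ruleset without the rules tagged SPDK_NVMF, leaving everything else intact
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete "$NS" 2>/dev/null || true     # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                      # clear the initiator-side address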
00:12:47.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:47.656 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:47.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.657 --rc genhtml_branch_coverage=1 00:12:47.657 --rc genhtml_function_coverage=1 00:12:47.657 --rc genhtml_legend=1 00:12:47.657 --rc geninfo_all_blocks=1 00:12:47.657 --rc geninfo_unexecuted_blocks=1 00:12:47.657 00:12:47.657 ' 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:47.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.657 --rc genhtml_branch_coverage=1 00:12:47.657 --rc genhtml_function_coverage=1 00:12:47.657 --rc genhtml_legend=1 00:12:47.657 --rc geninfo_all_blocks=1 00:12:47.657 --rc geninfo_unexecuted_blocks=1 00:12:47.657 00:12:47.657 ' 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:47.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.657 --rc genhtml_branch_coverage=1 00:12:47.657 --rc genhtml_function_coverage=1 00:12:47.657 --rc genhtml_legend=1 00:12:47.657 --rc geninfo_all_blocks=1 00:12:47.657 --rc geninfo_unexecuted_blocks=1 00:12:47.657 00:12:47.657 ' 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:47.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.657 --rc genhtml_branch_coverage=1 00:12:47.657 --rc genhtml_function_coverage=1 00:12:47.657 --rc genhtml_legend=1 00:12:47.657 --rc geninfo_all_blocks=1 00:12:47.657 --rc geninfo_unexecuted_blocks=1 00:12:47.657 00:12:47.657 ' 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
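The lt 1.15 2 check replayed above (here for nvmf_rpc, and earlier for nvmf_multitarget) is scripts/common.sh deciding whether the installed lcov is older than 2.x before choosing coverage options: cmp_versions splits both version strings on ".", "-" and ":" and compares them field by field. A stripped-down sketch of that comparison:

  # returns 0 (true) when version $1 sorts before version $2, e.g. lt 1.15 2
  lt() {
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1    # equal versions are not "less than"
  }

  lt 1.15 2 && echo "lcov is older than 2.x"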
00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:47.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:47.657 19:14:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:47.657 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:54.225 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:54.225 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:54.225 Found net devices under 0000:86:00.0: cvl_0_0 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.225 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:54.225 Found net devices under 0000:86:00.1: cvl_0_1 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:54.226 19:14:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:54.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:12:54.226 00:12:54.226 --- 10.0.0.2 ping statistics --- 00:12:54.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.226 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:54.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:12:54.226 00:12:54.226 --- 10.0.0.1 ping statistics --- 00:12:54.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.226 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3675240 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3675240 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3675240 ']' 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.226 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.226 [2024-11-26 19:14:16.496376] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
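nvmf_tcp_init, traced above, builds the point-to-point topology every later step relies on: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace with 10.0.0.2/24, the initiator-side port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, an iptables ACCEPT rule for TCP port 4420 is inserted through the ipts wrapper, and both directions are verified with ping before the target application starts. Condensed from the commands in the trace; the ipts function body is not shown here, so the tagged iptables call below is an assumption inferred from its expansion at nvmf/common.sh@790:

# Sketch of the traced topology setup; interface names and addresses are the
# ones this log reports for WFP6.
TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"             # target NIC lives in the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"      # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# ipts (assumed) forwards to iptables and tags the rule for later cleanup:
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                               # root ns -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1           # namespace -> initiator

With the data path in place, nvmfappstart launches the target inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, nvmfpid=3675240) and waitforlisten blocks until the RPC socket /var/tmp/spdk.sock accepts requests, which is where the trace continues.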
00:12:54.226 [2024-11-26 19:14:16.496420] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.226 [2024-11-26 19:14:16.571173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.226 [2024-11-26 19:14:16.612092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.226 [2024-11-26 19:14:16.612130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.226 [2024-11-26 19:14:16.612137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.226 [2024-11-26 19:14:16.612142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.226 [2024-11-26 19:14:16.612148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.226 [2024-11-26 19:14:16.613623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.226 [2024-11-26 19:14:16.613742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.226 [2024-11-26 19:14:16.613798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.226 [2024-11-26 19:14:16.613799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.226 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.226 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:54.226 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:54.226 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:54.226 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:54.484 "tick_rate": 2100000000, 00:12:54.484 "poll_groups": [ 00:12:54.484 { 00:12:54.484 "name": "nvmf_tgt_poll_group_000", 00:12:54.484 "admin_qpairs": 0, 00:12:54.484 "io_qpairs": 0, 00:12:54.484 "current_admin_qpairs": 0, 00:12:54.484 "current_io_qpairs": 0, 00:12:54.484 "pending_bdev_io": 0, 00:12:54.484 "completed_nvme_io": 0, 00:12:54.484 "transports": [] 00:12:54.484 }, 00:12:54.484 { 00:12:54.484 "name": "nvmf_tgt_poll_group_001", 00:12:54.484 "admin_qpairs": 0, 00:12:54.484 "io_qpairs": 0, 00:12:54.484 "current_admin_qpairs": 0, 00:12:54.484 "current_io_qpairs": 0, 00:12:54.484 "pending_bdev_io": 0, 00:12:54.484 "completed_nvme_io": 0, 00:12:54.484 "transports": [] 00:12:54.484 }, 00:12:54.484 { 00:12:54.484 "name": "nvmf_tgt_poll_group_002", 00:12:54.484 "admin_qpairs": 0, 00:12:54.484 "io_qpairs": 0, 00:12:54.484 
"current_admin_qpairs": 0, 00:12:54.484 "current_io_qpairs": 0, 00:12:54.484 "pending_bdev_io": 0, 00:12:54.484 "completed_nvme_io": 0, 00:12:54.484 "transports": [] 00:12:54.484 }, 00:12:54.484 { 00:12:54.484 "name": "nvmf_tgt_poll_group_003", 00:12:54.484 "admin_qpairs": 0, 00:12:54.484 "io_qpairs": 0, 00:12:54.484 "current_admin_qpairs": 0, 00:12:54.484 "current_io_qpairs": 0, 00:12:54.484 "pending_bdev_io": 0, 00:12:54.484 "completed_nvme_io": 0, 00:12:54.484 "transports": [] 00:12:54.484 } 00:12:54.484 ] 00:12:54.484 }' 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.484 [2024-11-26 19:14:17.482295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:54.484 "tick_rate": 2100000000, 00:12:54.484 "poll_groups": [ 00:12:54.484 { 00:12:54.484 "name": "nvmf_tgt_poll_group_000", 00:12:54.484 "admin_qpairs": 0, 00:12:54.484 "io_qpairs": 0, 00:12:54.484 "current_admin_qpairs": 0, 00:12:54.484 "current_io_qpairs": 0, 00:12:54.484 "pending_bdev_io": 0, 00:12:54.484 "completed_nvme_io": 0, 00:12:54.484 "transports": [ 00:12:54.484 { 00:12:54.484 "trtype": "TCP" 00:12:54.484 } 00:12:54.484 ] 00:12:54.484 }, 00:12:54.484 { 00:12:54.484 "name": "nvmf_tgt_poll_group_001", 00:12:54.484 "admin_qpairs": 0, 00:12:54.484 "io_qpairs": 0, 00:12:54.484 "current_admin_qpairs": 0, 00:12:54.484 "current_io_qpairs": 0, 00:12:54.484 "pending_bdev_io": 0, 00:12:54.484 "completed_nvme_io": 0, 00:12:54.484 "transports": [ 00:12:54.484 { 00:12:54.484 "trtype": "TCP" 00:12:54.484 } 00:12:54.484 ] 00:12:54.484 }, 00:12:54.484 { 00:12:54.484 "name": "nvmf_tgt_poll_group_002", 00:12:54.484 "admin_qpairs": 0, 00:12:54.484 "io_qpairs": 0, 00:12:54.484 "current_admin_qpairs": 0, 00:12:54.484 "current_io_qpairs": 0, 00:12:54.484 "pending_bdev_io": 0, 00:12:54.484 "completed_nvme_io": 0, 00:12:54.484 "transports": [ 00:12:54.484 { 00:12:54.484 "trtype": "TCP" 
00:12:54.484 } 00:12:54.484 ] 00:12:54.484 }, 00:12:54.484 { 00:12:54.484 "name": "nvmf_tgt_poll_group_003", 00:12:54.484 "admin_qpairs": 0, 00:12:54.484 "io_qpairs": 0, 00:12:54.484 "current_admin_qpairs": 0, 00:12:54.484 "current_io_qpairs": 0, 00:12:54.484 "pending_bdev_io": 0, 00:12:54.484 "completed_nvme_io": 0, 00:12:54.484 "transports": [ 00:12:54.484 { 00:12:54.484 "trtype": "TCP" 00:12:54.484 } 00:12:54.484 ] 00:12:54.484 } 00:12:54.484 ] 00:12:54.484 }' 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:54.484 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.742 Malloc1 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.742 [2024-11-26 19:14:17.660102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:54.742 [2024-11-26 19:14:17.688614] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:12:54.742 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:54.742 could not add new controller: failed to write to nvme-fabrics device 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:54.742 19:14:17 
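Between the target start and the subsystem work above, target/rpc.sh sanity-checks nvmf_get_stats twice: jcount confirms the JSON reports four poll groups (one per core of the -m 0xF mask), the transports array is null before nvmf_create_transport -t tcp -o -u 8192 and carries a TCP entry afterwards, and jsum confirms admin_qpairs/io_qpairs total zero while nothing is connected. The two helpers are the thin jq pipelines their expansions show; how $stats is fed into jq is not visible in the trace, so the here-strings below are an assumption:

# jcount/jsum as reconstructed from their traced expansions in target/rpc.sh.
jcount() {
  local filter=$1
  jq "$filter" <<< "$stats" | wc -l               # count matching JSON values
}
jsum() {
  local filter=$1
  jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'   # sum numeric values
}
stats=$(rpc_cmd nvmf_get_stats)                   # rpc_cmd: the autotest RPC wrapper used throughout this log
(( $(jcount '.poll_groups[].name') == 4 ))        # one poll group per core in 0xF
(( $(jsum '.poll_groups[].admin_qpairs') == 0 ))  # nothing connected yet
(( $(jsum '.poll_groups[].io_qpairs') == 0 ))
jq '.poll_groups[0].transports[0]' <<< "$stats"   # null before, a TCP entry after create_transport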
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.742 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.113 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.113 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:56.113 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.113 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:56.113 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:58.008 19:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.008 [2024-11-26 19:14:21.003631] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:12:58.008 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:58.008 could not add new controller: failed to write to nvme-fabrics device 00:12:58.008 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:58.008 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:58.008 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:58.008 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:58.008 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:58.008 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.008 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.008 
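The rejected connect just above (19:14:21), like the one at 19:14:17, is deliberate: a host whose NQN is not registered on nqn.2016-06.io.spdk:cnode1 must be refused with 'does not allow host' (the NOT wrapper asserts the nonzero exit), nvmf_subsystem_add_host must make the same connect succeed, nvmf_subsystem_remove_host must revoke it, and nvmf_subsystem_allow_any_host -e must let the unregistered host in, which the connect that follows verifies. A condensed replay of that access-control round trip; the connect helper is illustrative, rpc_cmd is the autotest wrapper seen throughout this log (driving the target's RPC socket, /var/tmp/spdk.sock per the waitforlisten trace), and the error handling and NOT machinery are omitted:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
SUBNQN=nqn.2016-06.io.spdk:cnode1

connect() {   # illustrative helper wrapping the nvme-cli call from the trace
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
       -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420
}

! connect                                           # no host registered -> must fail
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"
connect && nvme disconnect -n "$SUBNQN"             # now allowed
rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
! connect                                           # rejected again
rpc_cmd nvmf_subsystem_allow_any_host -e "$SUBNQN"  # open the subsystem to any host
connect && nvme disconnect -n "$SUBNQN"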
19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.009 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.380 19:14:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.380 19:14:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:59.380 19:14:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.380 19:14:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:59.380 19:14:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.276 
19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.276 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.543 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.543 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.543 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.543 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.543 [2024-11-26 19:14:24.398436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.543 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.543 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:01.543 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.543 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.543 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.543 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.543 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.543 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.543 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.543 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.914 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.914 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:02.914 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.914 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:02.914 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:04.811 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:04.811 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:04.811 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.811 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:04.811 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.811 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:04.811 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
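From target/rpc.sh@81 the trace enters a five-pass loop (the seq 1 5 appears just before it); the first pass is in flight above and the remaining ones below differ only in timestamps. Each pass provisions the subsystem over RPC, attaches from the initiator, waits for the SPDKISFASTANDAWESOME block device, then tears everything back down. Condensed from the traced commands; rpc_cmd, waitforserial and waitforserial_disconnect are the autotest helpers this log already expands, HOSTNQN/HOSTID are the values from the earlier sketch, and Malloc1 is the 64 MiB, 512-byte-block malloc bdev created earlier with bdev_malloc_create:

SUBNQN=nqn.2016-06.io.spdk:cnode1
for i in $(seq 1 5); do
  rpc_cmd nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5        # namespace with fixed NSID 5
  rpc_cmd nvmf_subsystem_allow_any_host "$SUBNQN"
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
       -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420
  waitforserial SPDKISFASTANDAWESOME                          # block device appears
  nvme disconnect -n "$SUBNQN"
  waitforserial_disconnect SPDKISFASTANDAWESOME               # and disappears again
  rpc_cmd nvmf_subsystem_remove_ns "$SUBNQN" 5
  rpc_cmd nvmf_delete_subsystem "$SUBNQN"
done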
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.811 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.811 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:04.811 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:04.811 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.811 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:04.811 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.811 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:04.811 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.811 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.811 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.812 [2024-11-26 19:14:27.745409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.812 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.185 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.185 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:06.185 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.185 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:06.185 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:08.080 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:08.080 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:08.080 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.080 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:08.080 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.080 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:08.080 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.080 [2024-11-26 19:14:31.090694] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.080 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:09.451 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:09.451 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:09.451 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.451 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:09.451 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:11.355 
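waitforserial and waitforserial_disconnect, expanded at every pass above, are the synchronization points between nvme-cli and the kernel: the first polls lsblk until the expected number of block devices advertising the serial SPDKISFASTANDAWESOME shows up, the second until the last one is gone. A simplified reimplementation of what the traced expansions show; the retry bound and sleep of the disconnect helper are not visible in this log and are assumed here:

waitforserial() {
  local serial=$1 want=${2:-1} i=0
  while (( i++ <= 15 )); do                        # bound taken from the trace
    (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == want )) && return 0
    sleep 2
  done
  return 1
}

waitforserial_disconnect() {
  local serial=$1 i=0
  while (( i++ <= 15 )); do                        # bound assumed, not shown in the trace
    lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
    sleep 1                                        # assumed
  done
  return 1
}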
19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.355 [2024-11-26 19:14:34.350600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.355 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.723 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.723 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:12.723 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.723 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:12.723 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:14.615 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:14.615 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:14.615 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.615 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:14.615 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.616 [2024-11-26 19:14:37.702427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.616 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.986 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:15.986 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:15.986 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.986 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:15.986 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:17.883 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:17.883 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:17.883 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.883 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:17.883 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.883 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:17.883 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.883 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.883 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:17.883 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:17.883 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.142 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:18.142 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:18.142 
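Editor's note: the waitforserial and waitforserial_disconnect helpers traced here poll lsblk until a block device with the expected SERIAL appears (or disappears). A rough equivalent of that polling loop, with the retry budget and sleep interval taken from the trace:
serial=SPDKISFASTANDAWESOME
i=0
while (( i++ <= 15 )); do                                  # same retry budget as the trace
    count=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")   # how many devices report this serial
    (( count >= 1 )) && break                              # device visible: stop waiting
    sleep 2
done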
19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.142 [2024-11-26 19:14:41.054138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.142 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
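Editor's note: in the second loop (target/rpc.sh@99-107) nvmf_subsystem_add_ns is called without -n, so the target assigns the next free NSID (1 in this run), and the namespace is then removed by that NSID before the subsystem is deleted. A condensed sketch of one iteration, reusing $rpc and $nqn from the earlier sketch:
$rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1        # no -n: NSID auto-assigned (1 here)
$rpc nvmf_subsystem_allow_any_host "$nqn"
$rpc nvmf_subsystem_remove_ns "$nqn" 1           # remove by the assigned NSID
$rpc nvmf_delete_subsystem "$nqn"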
common/autotest_common.sh@10 -- # set +x 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 [2024-11-26 19:14:41.102227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 
19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 [2024-11-26 19:14:41.150346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 [2024-11-26 19:14:41.198507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.143 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.144 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.144 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.144 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.144 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.144 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.144 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.144 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.144 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.144 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.144 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.144 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.144 [2024-11-26 19:14:41.246674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.144 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.144 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.144 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.144 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:18.402 "tick_rate": 2100000000, 00:13:18.402 "poll_groups": [ 00:13:18.402 { 00:13:18.402 "name": "nvmf_tgt_poll_group_000", 00:13:18.402 "admin_qpairs": 2, 00:13:18.402 "io_qpairs": 168, 00:13:18.402 "current_admin_qpairs": 0, 00:13:18.402 "current_io_qpairs": 0, 00:13:18.402 "pending_bdev_io": 0, 00:13:18.402 "completed_nvme_io": 221, 00:13:18.402 "transports": [ 00:13:18.402 { 00:13:18.402 "trtype": "TCP" 00:13:18.402 } 00:13:18.402 ] 00:13:18.402 }, 00:13:18.402 { 00:13:18.402 "name": "nvmf_tgt_poll_group_001", 00:13:18.402 "admin_qpairs": 2, 00:13:18.402 "io_qpairs": 168, 00:13:18.402 "current_admin_qpairs": 0, 00:13:18.402 "current_io_qpairs": 0, 00:13:18.402 "pending_bdev_io": 0, 00:13:18.402 "completed_nvme_io": 218, 00:13:18.402 "transports": [ 00:13:18.402 { 00:13:18.402 "trtype": "TCP" 00:13:18.402 } 00:13:18.402 ] 00:13:18.402 }, 00:13:18.402 { 00:13:18.402 "name": "nvmf_tgt_poll_group_002", 00:13:18.402 "admin_qpairs": 1, 00:13:18.402 "io_qpairs": 168, 00:13:18.402 "current_admin_qpairs": 0, 00:13:18.402 "current_io_qpairs": 0, 00:13:18.402 "pending_bdev_io": 0, 00:13:18.402 "completed_nvme_io": 303, 00:13:18.402 "transports": [ 00:13:18.402 { 00:13:18.402 "trtype": "TCP" 00:13:18.402 } 00:13:18.402 ] 00:13:18.402 }, 00:13:18.402 { 00:13:18.402 "name": "nvmf_tgt_poll_group_003", 00:13:18.402 "admin_qpairs": 2, 00:13:18.402 "io_qpairs": 168, 00:13:18.402 "current_admin_qpairs": 0, 00:13:18.402 "current_io_qpairs": 0, 00:13:18.402 "pending_bdev_io": 0, 00:13:18.402 "completed_nvme_io": 280, 00:13:18.402 "transports": [ 00:13:18.402 { 00:13:18.402 "trtype": "TCP" 00:13:18.402 } 00:13:18.402 ] 00:13:18.402 } 00:13:18.402 ] 00:13:18.402 }' 00:13:18.402 19:14:41 
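Editor's note: the jsum helper applied to this nvmf_get_stats output simply sums one JSON field across all poll groups with jq and awk. A sketch of the same aggregation, with $rpc as in the earlier sketches; the expected sums are the ones visible in the trace below (2+2+1+2 admin qpairs, 4x168 I/O qpairs):
stats=$($rpc nvmf_get_stats)
echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # 7 in this run
echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'   # 672 in this run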
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:18.402 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:18.403 rmmod nvme_tcp 00:13:18.403 rmmod nvme_fabrics 00:13:18.403 rmmod nvme_keyring 00:13:18.403 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:18.403 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:18.403 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:18.403 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3675240 ']' 00:13:18.403 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3675240 00:13:18.403 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3675240 ']' 00:13:18.403 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3675240 00:13:18.403 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:18.403 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.403 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3675240 00:13:18.661 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.661 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.661 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3675240' 00:13:18.661 killing process with pid 3675240 00:13:18.661 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3675240 00:13:18.661 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3675240 00:13:18.661 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:18.661 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:18.661 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:18.661 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:18.661 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:18.661 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:18.661 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:18.662 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:18.662 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:18.662 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.662 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.662 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.197 19:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:21.197 00:13:21.197 real 0m33.503s 00:13:21.197 user 1m41.875s 00:13:21.197 sys 0m6.456s 00:13:21.197 19:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.197 19:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.197 ************************************ 00:13:21.197 END TEST nvmf_rpc 00:13:21.197 ************************************ 00:13:21.197 19:14:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:21.197 19:14:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:21.197 19:14:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.197 19:14:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:21.197 ************************************ 00:13:21.197 START TEST nvmf_invalid 00:13:21.197 ************************************ 00:13:21.197 19:14:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:21.197 * Looking for test storage... 
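Editor's note: nvmftestfini, traced above, unloads the host-side NVMe modules, kills the target process and reverts the network changes made for the test. A hedged sketch of an equivalent manual cleanup; $nvmfpid is a placeholder for the nvmf_tgt pid (3675240 in this run), and deleting the namespace is assumed to approximate what _remove_spdk_ns does:
modprobe -v -r nvme-tcp                                  # also drops nvme_fabrics / nvme_keyring, as in the log
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                                          # stop the nvmf_tgt reactor
iptables-save | grep -v SPDK_NVMF | iptables-restore     # remove only the rules tagged by the test
ip -4 addr flush cvl_0_1                                 # initiator-side interface from this run
ip netns del cvl_0_0_ns_spdk 2>/dev/null                 # assumed equivalent of _remove_spdk_ns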
00:13:21.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:21.197 19:14:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:21.197 19:14:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:21.197 19:14:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:21.197 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:21.197 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:21.197 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:21.197 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:21.197 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.197 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:21.197 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:21.197 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:21.197 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:21.197 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:21.197 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:21.197 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:21.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.198 --rc genhtml_branch_coverage=1 00:13:21.198 --rc genhtml_function_coverage=1 00:13:21.198 --rc genhtml_legend=1 00:13:21.198 --rc geninfo_all_blocks=1 00:13:21.198 --rc geninfo_unexecuted_blocks=1 00:13:21.198 00:13:21.198 ' 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:21.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.198 --rc genhtml_branch_coverage=1 00:13:21.198 --rc genhtml_function_coverage=1 00:13:21.198 --rc genhtml_legend=1 00:13:21.198 --rc geninfo_all_blocks=1 00:13:21.198 --rc geninfo_unexecuted_blocks=1 00:13:21.198 00:13:21.198 ' 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:21.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.198 --rc genhtml_branch_coverage=1 00:13:21.198 --rc genhtml_function_coverage=1 00:13:21.198 --rc genhtml_legend=1 00:13:21.198 --rc geninfo_all_blocks=1 00:13:21.198 --rc geninfo_unexecuted_blocks=1 00:13:21.198 00:13:21.198 ' 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:21.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.198 --rc genhtml_branch_coverage=1 00:13:21.198 --rc genhtml_function_coverage=1 00:13:21.198 --rc genhtml_legend=1 00:13:21.198 --rc geninfo_all_blocks=1 00:13:21.198 --rc geninfo_unexecuted_blocks=1 00:13:21.198 00:13:21.198 ' 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:21.198 19:14:44 
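Editor's note: the lt/cmp_versions helpers traced above split version strings on '.', '-' and ':' and compare them component by component; that is how the lcov version is checked against 1.15. A compact illustration of the same idea (version_lt is an illustrative name, numeric components assumed), not the scripts/common.sh implementation itself:
version_lt() {                                   # returns 0 if $1 < $2
    local IFS=.-:                                # same separators as cmp_versions
    local -a a=($1) b=($2)
    local i
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                     # equal, so not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"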
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.198 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:21.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
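Editor's note: the "[: : integer expression expected" message above comes from common.sh evaluating '[' '' -eq 1 ']' while the variable under test is empty; it is harmless in this run but easy to avoid. A small illustration with a placeholder variable name:
VAR=""
[ "$VAR" -eq 1 ]                       # reproduces: [: : integer expression expected
[ "${VAR:-0}" -eq 1 ] || echo "no"     # defaulting to 0 keeps the numeric test well-formed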
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:21.199 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:27.769 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:27.770 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:27.770 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:27.770 Found net devices under 0000:86:00.0: cvl_0_0 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:27.770 Found net devices under 0000:86:00.1: cvl_0_1 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:27.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:13:27.770 00:13:27.770 --- 10.0.0.2 ping statistics --- 00:13:27.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.770 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
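Editor's note: nvmf_tcp_init, traced above, isolates the target-side port in its own network namespace and then verifies both directions with ping. A condensed sketch of that setup; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones from this run, and the iptables comment is abbreviated relative to the one in the trace:
ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: allow NVMe/TCP'                # tagged so cleanup can strip it later
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator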
00:13:27.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:13:27.770 00:13:27.770 --- 10.0.0.1 ping statistics --- 00:13:27.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.770 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:27.770 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:27.770 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:27.770 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:27.770 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:27.770 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.770 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3683087 00:13:27.770 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:27.770 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3683087 00:13:27.770 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3683087 ']' 00:13:27.770 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.770 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.770 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.770 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.770 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.770 [2024-11-26 19:14:50.076192] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:13:27.770 [2024-11-26 19:14:50.076245] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.770 [2024-11-26 19:14:50.155836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:27.770 [2024-11-26 19:14:50.198536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.770 [2024-11-26 19:14:50.198574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.770 [2024-11-26 19:14:50.198582] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.770 [2024-11-26 19:14:50.198587] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.770 [2024-11-26 19:14:50.198592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.770 [2024-11-26 19:14:50.200016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.770 [2024-11-26 19:14:50.200123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.770 [2024-11-26 19:14:50.200230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.770 [2024-11-26 19:14:50.200231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.770 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.771 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:27.771 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:27.771 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:27.771 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.771 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.771 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:27.771 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15722 00:13:27.771 [2024-11-26 19:14:50.506983] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:27.771 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:27.771 { 00:13:27.771 "nqn": "nqn.2016-06.io.spdk:cnode15722", 00:13:27.771 "tgt_name": "foobar", 00:13:27.771 "method": "nvmf_create_subsystem", 00:13:27.771 "req_id": 1 00:13:27.771 } 00:13:27.771 Got JSON-RPC error response 00:13:27.771 response: 00:13:27.771 { 00:13:27.771 "code": -32603, 00:13:27.771 "message": "Unable to find target foobar" 00:13:27.771 }' 00:13:27.771 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:27.771 { 00:13:27.771 "nqn": "nqn.2016-06.io.spdk:cnode15722", 00:13:27.771 "tgt_name": "foobar", 00:13:27.771 "method": "nvmf_create_subsystem", 00:13:27.771 "req_id": 1 00:13:27.771 } 00:13:27.771 Got JSON-RPC error response 00:13:27.771 
response: 00:13:27.771 { 00:13:27.771 "code": -32603, 00:13:27.771 "message": "Unable to find target foobar" 00:13:27.771 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:27.771 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:27.771 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9600 00:13:27.771 [2024-11-26 19:14:50.707678] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9600: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:27.771 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:27.771 { 00:13:27.771 "nqn": "nqn.2016-06.io.spdk:cnode9600", 00:13:27.771 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:27.771 "method": "nvmf_create_subsystem", 00:13:27.771 "req_id": 1 00:13:27.771 } 00:13:27.771 Got JSON-RPC error response 00:13:27.771 response: 00:13:27.771 { 00:13:27.771 "code": -32602, 00:13:27.771 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:27.771 }' 00:13:27.771 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:27.771 { 00:13:27.771 "nqn": "nqn.2016-06.io.spdk:cnode9600", 00:13:27.771 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:27.771 "method": "nvmf_create_subsystem", 00:13:27.771 "req_id": 1 00:13:27.771 } 00:13:27.771 Got JSON-RPC error response 00:13:27.771 response: 00:13:27.771 { 00:13:27.771 "code": -32602, 00:13:27.771 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:27.771 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:27.771 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:27.771 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30441 00:13:28.029 [2024-11-26 19:14:50.908321] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30441: invalid model number 'SPDK_Controller' 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:28.029 { 00:13:28.029 "nqn": "nqn.2016-06.io.spdk:cnode30441", 00:13:28.029 "model_number": "SPDK_Controller\u001f", 00:13:28.029 "method": "nvmf_create_subsystem", 00:13:28.029 "req_id": 1 00:13:28.029 } 00:13:28.029 Got JSON-RPC error response 00:13:28.029 response: 00:13:28.029 { 00:13:28.029 "code": -32602, 00:13:28.029 "message": "Invalid MN SPDK_Controller\u001f" 00:13:28.029 }' 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:28.029 { 00:13:28.029 "nqn": "nqn.2016-06.io.spdk:cnode30441", 00:13:28.029 "model_number": "SPDK_Controller\u001f", 00:13:28.029 "method": "nvmf_create_subsystem", 00:13:28.029 "req_id": 1 00:13:28.029 } 00:13:28.029 Got JSON-RPC error response 00:13:28.029 response: 00:13:28.029 { 00:13:28.029 "code": -32602, 00:13:28.029 "message": "Invalid MN SPDK_Controller\u001f" 00:13:28.029 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:28.029 19:14:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.029 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.030 19:14:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:28.030 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ a == \- ]] 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 
'ap2^3hzcg?E5ZZ?F]G%F"' 00:13:28.030 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'ap2^3hzcg?E5ZZ?F]G%F"' nqn.2016-06.io.spdk:cnode12285 00:13:28.288 [2024-11-26 19:14:51.261535] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12285: invalid serial number 'ap2^3hzcg?E5ZZ?F]G%F"' 00:13:28.288 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:28.288 { 00:13:28.288 "nqn": "nqn.2016-06.io.spdk:cnode12285", 00:13:28.288 "serial_number": "ap2^3hzcg?E5ZZ?F]G%F\"", 00:13:28.288 "method": "nvmf_create_subsystem", 00:13:28.288 "req_id": 1 00:13:28.288 } 00:13:28.288 Got JSON-RPC error response 00:13:28.288 response: 00:13:28.288 { 00:13:28.288 "code": -32602, 00:13:28.288 "message": "Invalid SN ap2^3hzcg?E5ZZ?F]G%F\"" 00:13:28.288 }' 00:13:28.288 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:28.288 { 00:13:28.288 "nqn": "nqn.2016-06.io.spdk:cnode12285", 00:13:28.288 "serial_number": "ap2^3hzcg?E5ZZ?F]G%F\"", 00:13:28.288 "method": "nvmf_create_subsystem", 00:13:28.288 "req_id": 1 00:13:28.288 } 00:13:28.288 Got JSON-RPC error response 00:13:28.288 response: 00:13:28.288 { 00:13:28.288 "code": -32602, 00:13:28.288 "message": "Invalid SN ap2^3hzcg?E5ZZ?F]G%F\"" 00:13:28.288 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:28.288 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:28.288 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:28.288 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 
00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x35' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 87 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.289 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# string+=% 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:28.548 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ e == \- ]] 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'e=~'\''8J3)5$20YerW+oTao\e$yOauU!`(7/r%,_p5I' 00:13:28.549 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'e=~'\''8J3)5$20YerW+oTao\e$yOauU!`(7/r%,_p5I' nqn.2016-06.io.spdk:cnode4998 00:13:28.806 [2024-11-26 19:14:51.731072] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4998: invalid model number 'e=~'8J3)5$20YerW+oTao\e$yOauU!`(7/r%,_p5I' 00:13:28.806 
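The two long printf/echo cascades above are target/invalid.sh's gen_random_s helper at work: it was asked for a 21-character string to use as a deliberately bogus serial number and then for a 41-character string to use as a bogus model number, and it builds each one a character at a time by picking random entries from an array of the ASCII codes 32 through 127, printing the code as hex and appending the resulting character. The sketch below is reconstructed from the trace rather than copied from the script, so treat the exact array handling and the leading-dash guard as approximations.

  # Reconstruction of gen_random_s as it appears in the xtrace above; the real helper
  # lives in test/nvmf/target/invalid.sh and may differ in detail.
  gen_random_s() {
      local length=$1 ll code char string=
      local chars=($(seq 32 127))      # decimal codes 32..127, the array dumped in the trace
      for (( ll = 0; ll < length; ll++ )); do
          code=${chars[RANDOM % ${#chars[@]}]}                 # pick one code at random
          printf -v char '%b' "\\x$(printf '%x' "$code")"      # e.g. 0x61 becomes 'a'
          string+=$char
      done
      # The trace also tests the first character against '-' ([[ ... == \- ]]), presumably
      # so the result cannot be mistaken for an option flag; that branch is not taken here.
      echo "$string"
  }

In this run the helper produced ap2^3hzcg?E5ZZ?F]G%F" for the serial-number case and e=~'8J3)5$20YerW+oTao\e$yOauU!`(7/r%,_p5I for the model-number case, and nvmf_create_subsystem rejected both with JSON-RPC error -32602 (Invalid SN / Invalid MN), which is precisely what the test asserts.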
19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:28.806 { 00:13:28.806 "nqn": "nqn.2016-06.io.spdk:cnode4998", 00:13:28.806 "model_number": "e=~'\''8J3)5$20YerW+oTao\\e$yOauU!`(7/r%,_p5I", 00:13:28.806 "method": "nvmf_create_subsystem", 00:13:28.806 "req_id": 1 00:13:28.806 } 00:13:28.806 Got JSON-RPC error response 00:13:28.806 response: 00:13:28.806 { 00:13:28.806 "code": -32602, 00:13:28.806 "message": "Invalid MN e=~'\''8J3)5$20YerW+oTao\\e$yOauU!`(7/r%,_p5I" 00:13:28.806 }' 00:13:28.806 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:28.806 { 00:13:28.806 "nqn": "nqn.2016-06.io.spdk:cnode4998", 00:13:28.806 "model_number": "e=~'8J3)5$20YerW+oTao\\e$yOauU!`(7/r%,_p5I", 00:13:28.806 "method": "nvmf_create_subsystem", 00:13:28.806 "req_id": 1 00:13:28.806 } 00:13:28.806 Got JSON-RPC error response 00:13:28.806 response: 00:13:28.806 { 00:13:28.806 "code": -32602, 00:13:28.806 "message": "Invalid MN e=~'8J3)5$20YerW+oTao\\e$yOauU!`(7/r%,_p5I" 00:13:28.806 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:28.806 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:29.064 [2024-11-26 19:14:51.923760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.064 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:29.064 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:29.064 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:29.064 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:29.064 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:29.064 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:29.321 [2024-11-26 19:14:52.318305] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:29.322 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:29.322 { 00:13:29.322 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:29.322 "listen_address": { 00:13:29.322 "trtype": "tcp", 00:13:29.322 "traddr": "", 00:13:29.322 "trsvcid": "4421" 00:13:29.322 }, 00:13:29.322 "method": "nvmf_subsystem_remove_listener", 00:13:29.322 "req_id": 1 00:13:29.322 } 00:13:29.322 Got JSON-RPC error response 00:13:29.322 response: 00:13:29.322 { 00:13:29.322 "code": -32602, 00:13:29.322 "message": "Invalid parameters" 00:13:29.322 }' 00:13:29.322 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:29.322 { 00:13:29.322 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:29.322 "listen_address": { 00:13:29.322 "trtype": "tcp", 00:13:29.322 "traddr": "", 00:13:29.322 "trsvcid": "4421" 00:13:29.322 }, 00:13:29.322 "method": "nvmf_subsystem_remove_listener", 00:13:29.322 "req_id": 1 00:13:29.322 } 00:13:29.322 Got JSON-RPC error response 00:13:29.322 response: 00:13:29.322 { 00:13:29.322 "code": -32602, 00:13:29.322 "message": "Invalid parameters" 00:13:29.322 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ 
\l\i\s\t\e\n\e\r\.* ]] 00:13:29.322 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9767 -i 0 00:13:29.579 [2024-11-26 19:14:52.522963] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9767: invalid cntlid range [0-65519] 00:13:29.579 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:29.579 { 00:13:29.579 "nqn": "nqn.2016-06.io.spdk:cnode9767", 00:13:29.579 "min_cntlid": 0, 00:13:29.579 "method": "nvmf_create_subsystem", 00:13:29.579 "req_id": 1 00:13:29.579 } 00:13:29.579 Got JSON-RPC error response 00:13:29.579 response: 00:13:29.579 { 00:13:29.579 "code": -32602, 00:13:29.579 "message": "Invalid cntlid range [0-65519]" 00:13:29.579 }' 00:13:29.579 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:29.579 { 00:13:29.579 "nqn": "nqn.2016-06.io.spdk:cnode9767", 00:13:29.579 "min_cntlid": 0, 00:13:29.579 "method": "nvmf_create_subsystem", 00:13:29.579 "req_id": 1 00:13:29.579 } 00:13:29.579 Got JSON-RPC error response 00:13:29.579 response: 00:13:29.579 { 00:13:29.579 "code": -32602, 00:13:29.579 "message": "Invalid cntlid range [0-65519]" 00:13:29.579 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:29.579 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12251 -i 65520 00:13:29.836 [2024-11-26 19:14:52.731697] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12251: invalid cntlid range [65520-65519] 00:13:29.836 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:29.836 { 00:13:29.836 "nqn": "nqn.2016-06.io.spdk:cnode12251", 00:13:29.836 "min_cntlid": 65520, 00:13:29.836 "method": "nvmf_create_subsystem", 00:13:29.836 "req_id": 1 00:13:29.836 } 00:13:29.836 Got JSON-RPC error response 00:13:29.836 response: 00:13:29.836 { 00:13:29.836 "code": -32602, 00:13:29.836 "message": "Invalid cntlid range [65520-65519]" 00:13:29.836 }' 00:13:29.836 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:29.836 { 00:13:29.836 "nqn": "nqn.2016-06.io.spdk:cnode12251", 00:13:29.836 "min_cntlid": 65520, 00:13:29.836 "method": "nvmf_create_subsystem", 00:13:29.836 "req_id": 1 00:13:29.836 } 00:13:29.836 Got JSON-RPC error response 00:13:29.836 response: 00:13:29.836 { 00:13:29.836 "code": -32602, 00:13:29.836 "message": "Invalid cntlid range [65520-65519]" 00:13:29.836 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:29.836 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8745 -I 0 00:13:29.836 [2024-11-26 19:14:52.920314] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8745: invalid cntlid range [1-0] 00:13:30.094 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:30.094 { 00:13:30.094 "nqn": "nqn.2016-06.io.spdk:cnode8745", 00:13:30.094 "max_cntlid": 0, 00:13:30.094 "method": "nvmf_create_subsystem", 00:13:30.094 "req_id": 1 00:13:30.094 } 00:13:30.094 Got JSON-RPC error response 00:13:30.094 response: 00:13:30.094 { 00:13:30.094 "code": 
-32602, 00:13:30.094 "message": "Invalid cntlid range [1-0]" 00:13:30.094 }' 00:13:30.094 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:30.094 { 00:13:30.094 "nqn": "nqn.2016-06.io.spdk:cnode8745", 00:13:30.094 "max_cntlid": 0, 00:13:30.094 "method": "nvmf_create_subsystem", 00:13:30.094 "req_id": 1 00:13:30.094 } 00:13:30.094 Got JSON-RPC error response 00:13:30.094 response: 00:13:30.094 { 00:13:30.094 "code": -32602, 00:13:30.094 "message": "Invalid cntlid range [1-0]" 00:13:30.094 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:30.094 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22909 -I 65520 00:13:30.094 [2024-11-26 19:14:53.120993] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22909: invalid cntlid range [1-65520] 00:13:30.094 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:30.094 { 00:13:30.094 "nqn": "nqn.2016-06.io.spdk:cnode22909", 00:13:30.094 "max_cntlid": 65520, 00:13:30.094 "method": "nvmf_create_subsystem", 00:13:30.094 "req_id": 1 00:13:30.094 } 00:13:30.094 Got JSON-RPC error response 00:13:30.094 response: 00:13:30.094 { 00:13:30.094 "code": -32602, 00:13:30.094 "message": "Invalid cntlid range [1-65520]" 00:13:30.094 }' 00:13:30.094 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:30.094 { 00:13:30.094 "nqn": "nqn.2016-06.io.spdk:cnode22909", 00:13:30.094 "max_cntlid": 65520, 00:13:30.094 "method": "nvmf_create_subsystem", 00:13:30.094 "req_id": 1 00:13:30.094 } 00:13:30.094 Got JSON-RPC error response 00:13:30.094 response: 00:13:30.094 { 00:13:30.094 "code": -32602, 00:13:30.094 "message": "Invalid cntlid range [1-65520]" 00:13:30.094 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:30.094 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23057 -i 6 -I 5 00:13:30.352 [2024-11-26 19:14:53.317696] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23057: invalid cntlid range [6-5] 00:13:30.352 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:30.352 { 00:13:30.352 "nqn": "nqn.2016-06.io.spdk:cnode23057", 00:13:30.352 "min_cntlid": 6, 00:13:30.352 "max_cntlid": 5, 00:13:30.352 "method": "nvmf_create_subsystem", 00:13:30.352 "req_id": 1 00:13:30.352 } 00:13:30.352 Got JSON-RPC error response 00:13:30.352 response: 00:13:30.352 { 00:13:30.352 "code": -32602, 00:13:30.352 "message": "Invalid cntlid range [6-5]" 00:13:30.352 }' 00:13:30.352 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:30.352 { 00:13:30.352 "nqn": "nqn.2016-06.io.spdk:cnode23057", 00:13:30.352 "min_cntlid": 6, 00:13:30.352 "max_cntlid": 5, 00:13:30.352 "method": "nvmf_create_subsystem", 00:13:30.352 "req_id": 1 00:13:30.352 } 00:13:30.352 Got JSON-RPC error response 00:13:30.352 response: 00:13:30.352 { 00:13:30.352 "code": -32602, 00:13:30.352 "message": "Invalid cntlid range [6-5]" 00:13:30.352 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:30.352 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:30.352 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:30.352 { 00:13:30.352 "name": "foobar", 00:13:30.352 "method": "nvmf_delete_target", 00:13:30.352 "req_id": 1 00:13:30.352 } 00:13:30.352 Got JSON-RPC error response 00:13:30.352 response: 00:13:30.352 { 00:13:30.352 "code": -32602, 00:13:30.352 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:30.352 }' 00:13:30.352 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:30.352 { 00:13:30.352 "name": "foobar", 00:13:30.352 "method": "nvmf_delete_target", 00:13:30.352 "req_id": 1 00:13:30.352 } 00:13:30.352 Got JSON-RPC error response 00:13:30.352 response: 00:13:30.352 { 00:13:30.352 "code": -32602, 00:13:30.352 "message": "The specified target doesn't exist, cannot delete it." 00:13:30.352 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:30.352 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:30.352 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:30.352 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:30.352 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:30.352 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:30.352 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:30.352 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:30.352 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:30.352 rmmod nvme_tcp 00:13:30.611 rmmod nvme_fabrics 00:13:30.611 rmmod nvme_keyring 00:13:30.611 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:30.611 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:30.611 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:30.611 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3683087 ']' 00:13:30.611 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3683087 00:13:30.611 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3683087 ']' 00:13:30.611 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3683087 00:13:30.611 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:30.611 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.611 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3683087 00:13:30.611 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:30.611 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:30.611 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3683087' 00:13:30.611 killing process with pid 3683087 00:13:30.611 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3683087 00:13:30.611 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3683087 00:13:30.869 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:30.869 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:30.869 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:30.869 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:30.869 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:30.869 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:30.869 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:13:30.869 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:30.869 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:30.869 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.869 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.869 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.772 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:32.772 00:13:32.772 real 0m11.967s 00:13:32.772 user 0m18.340s 00:13:32.772 sys 0m5.396s 00:13:32.772 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.772 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:32.772 ************************************ 00:13:32.772 END TEST nvmf_invalid 00:13:32.772 ************************************ 00:13:32.772 19:14:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:32.772 19:14:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:32.772 19:14:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.772 19:14:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:32.772 ************************************ 00:13:32.772 START TEST nvmf_connect_stress 00:13:32.772 ************************************ 00:13:33.031 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:33.031 * Looking for test storage... 
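[Annotation, not part of the captured log] The nvmf_invalid checks that finish just above drive nvmf_create_subsystem through rpc.py with out-of-range controller ID bounds and assert that the JSON-RPC layer rejects them with code -32602. Below is a minimal sketch of the same negative checks, using only flags and error strings that appear in the trace; the expect_error helper and the shortened rpc.py path are assumptions for illustration, and a target must already be listening on the default RPC socket.

#!/usr/bin/env bash
# Sketch of the cntlid-range negative tests seen in the trace (flags taken from the log).
rpc=./spdk/scripts/rpc.py   # shortened; the log uses the full workspace path

expect_error() {            # hypothetical helper: run an RPC and look for the expected error text
  local pattern=$1; shift
  "$rpc" "$@" 2>&1 | grep -q "$pattern" && echo "ok: $pattern"
}

# max_cntlid below the default min of 1 -> "Invalid cntlid range [1-0]"
expect_error 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8745 -I 0
# max_cntlid above the allowed maximum -> "Invalid cntlid range [1-65520]"
expect_error 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22909 -I 65520
# min_cntlid greater than max_cntlid -> "Invalid cntlid range [6-5]"
expect_error 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23057 -i 6 -I 5

As in the trace, deleting a nonexistent target via multitarget_rpc.py nvmf_delete_target --name foobar is expected to fail the same way, with "The specified target doesn't exist, cannot delete it."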
00:13:33.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:33.031 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:33.031 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:13:33.031 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:33.031 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:33.031 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:33.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.032 --rc genhtml_branch_coverage=1 00:13:33.032 --rc genhtml_function_coverage=1 00:13:33.032 --rc genhtml_legend=1 00:13:33.032 --rc geninfo_all_blocks=1 00:13:33.032 --rc geninfo_unexecuted_blocks=1 00:13:33.032 00:13:33.032 ' 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:33.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.032 --rc genhtml_branch_coverage=1 00:13:33.032 --rc genhtml_function_coverage=1 00:13:33.032 --rc genhtml_legend=1 00:13:33.032 --rc geninfo_all_blocks=1 00:13:33.032 --rc geninfo_unexecuted_blocks=1 00:13:33.032 00:13:33.032 ' 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:33.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.032 --rc genhtml_branch_coverage=1 00:13:33.032 --rc genhtml_function_coverage=1 00:13:33.032 --rc genhtml_legend=1 00:13:33.032 --rc geninfo_all_blocks=1 00:13:33.032 --rc geninfo_unexecuted_blocks=1 00:13:33.032 00:13:33.032 ' 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:33.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.032 --rc genhtml_branch_coverage=1 00:13:33.032 --rc genhtml_function_coverage=1 00:13:33.032 --rc genhtml_legend=1 00:13:33.032 --rc geninfo_all_blocks=1 00:13:33.032 --rc geninfo_unexecuted_blocks=1 00:13:33.032 00:13:33.032 ' 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:33.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:33.032 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:33.033 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.033 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.033 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.033 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:33.033 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:33.033 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:33.033 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:39.600 19:15:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:39.600 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:39.600 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:39.600 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:39.601 Found net devices under 0000:86:00.0: cvl_0_0 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:39.601 Found net devices under 0000:86:00.1: cvl_0_1 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:39.601 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:39.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:39.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:13:39.601 00:13:39.601 --- 10.0.0.2 ping statistics --- 00:13:39.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.601 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:39.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:39.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:13:39.601 00:13:39.601 --- 10.0.0.1 ping statistics --- 00:13:39.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.601 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3687430 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3687430 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3687430 ']' 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:39.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.601 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.601 [2024-11-26 19:15:02.163973] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:13:39.601 [2024-11-26 19:15:02.164014] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.601 [2024-11-26 19:15:02.242559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:39.601 [2024-11-26 19:15:02.288780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.601 [2024-11-26 19:15:02.288813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.601 [2024-11-26 19:15:02.288821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.601 [2024-11-26 19:15:02.288828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.601 [2024-11-26 19:15:02.288834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.601 [2024-11-26 19:15:02.290306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.601 [2024-11-26 19:15:02.290415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.601 [2024-11-26 19:15:02.290416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.167 [2024-11-26 19:15:03.044838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
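[Annotation, not part of the captured log] By this point nvmftestinit has moved one E810 port (cvl_0_0) into the cvl_0_0_ns_spdk namespace, assigned 10.0.0.2/24 to it and 10.0.0.1/24 to cvl_0_1, opened TCP port 4420 in iptables, and started nvmf_tgt inside the namespace; connect_stress.sh then configures it over RPC. A condensed sketch of that bring-up follows, using only commands visible in the trace; workspace paths are shortened and it assumes root privileges.

# Condensed bring-up as seen in the trace (paths shortened; run as root).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# The log tags this rule with an SPDK_NVMF comment so nvmftestfini can strip it later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Start the target inside the namespace, then configure it over /var/tmp/spdk.sock.
ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
# (the suite waits for the RPC socket via waitforlisten before issuing RPCs)
./spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

The listener on 10.0.0.2:4420, the NULL1 null bdev, and the connect_stress workload against nqn.2016-06.io.spdk:cnode1 are added in the trace lines that follow.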
00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.167 [2024-11-26 19:15:03.065046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.167 NULL1 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3687715 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.167 19:15:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.167 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.425 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.425 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:40.425 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.425 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.425 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.989 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.989 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:40.989 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.989 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.989 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.246 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.246 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:41.246 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.246 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.246 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.503 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.503 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:41.503 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.503 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.503 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.761 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.761 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:41.761 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.761 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.761 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.017 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.017 19:15:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:42.017 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.017 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.017 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.582 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.582 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:42.582 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.582 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.582 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.839 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.839 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:42.839 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.839 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.839 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.096 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.096 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:43.096 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.096 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.096 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.353 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.353 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:43.353 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.353 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.353 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.917 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.917 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:43.917 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.917 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.917 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.174 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.174 19:15:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:44.174 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.174 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.174 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.431 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.431 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:44.431 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.431 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.431 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.687 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.687 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:44.687 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.688 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.688 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.944 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.945 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:44.945 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.945 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.945 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.508 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.508 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:45.508 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.508 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.508 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.765 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.765 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:45.765 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.765 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.765 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.021 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.021 19:15:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:46.021 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.021 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.021 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.276 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.276 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:46.276 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.276 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.276 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.838 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.838 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:46.838 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.838 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.838 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.094 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.094 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:47.094 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.094 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.094 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.350 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.350 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:47.350 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.350 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.350 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.607 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.607 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:47.607 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.607 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.607 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.863 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.863 19:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:47.863 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.120 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.120 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.376 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.377 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:48.377 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.377 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.377 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.633 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.633 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:48.633 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.633 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.633 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.890 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.890 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:48.890 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.890 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.890 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.453 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.453 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:49.453 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.453 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.453 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.751 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.751 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:49.751 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.751 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.751 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.019 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.019 19:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:50.019 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.019 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.019 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.297 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687715 00:13:50.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3687715) - No such process 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3687715 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:50.297 rmmod nvme_tcp 00:13:50.297 rmmod nvme_fabrics 00:13:50.297 rmmod nvme_keyring 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3687430 ']' 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3687430 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3687430 ']' 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3687430 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3687430 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3687430' 00:13:50.297 killing process with pid 3687430 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3687430 00:13:50.297 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3687430 00:13:50.582 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:50.582 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:50.583 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:50.583 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:50.583 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:50.583 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:50.583 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:50.583 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:50.583 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:50.583 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.583 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.583 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.576 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:52.577 00:13:52.577 real 0m19.730s 00:13:52.577 user 0m41.409s 00:13:52.577 sys 0m8.692s 00:13:52.577 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.577 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.577 ************************************ 00:13:52.577 END TEST nvmf_connect_stress 00:13:52.577 ************************************ 00:13:52.577 19:15:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:52.577 19:15:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:52.577 19:15:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.577 19:15:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:52.577 ************************************ 00:13:52.577 START TEST nvmf_fused_ordering 00:13:52.577 ************************************ 00:13:52.577 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:52.836 * Looking for test storage... 
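The block of repeated "kill -0 3687715" / "rpc_cmd" entries above is the monitoring loop in test/nvmf/target/connect_stress.sh: while the stress tool (pid 3687715 in this run) is still alive, the script keeps driving RPCs at the target, and once kill -0 reports "No such process" it reaps the stressor, removes its RPC input file, and tears the target down. A minimal sketch of that loop, reconstructed from the traced line numbers; the variable name and the way rpc.txt feeds rpc_cmd are assumptions, not the script's actual code:

# Reconstructed from the connect_stress.sh trace (lines 34-43); identifiers are illustrative.
while kill -0 "$stress_pid"; do      # line 34: is the stress process still running?
    rpc_cmd < rpc.txt                # line 35: keep exercising the RPC path meanwhile (input file assumed)
done
wait "$stress_pid"                   # line 38: collect the stressor's exit status once kill -0 fails
rm -f rpc.txt                        # line 39: drop the temporary RPC input
trap - SIGINT SIGTERM EXIT           # line 41: clear the error trap
nvmftestfini                         # line 43: stop nvmf_tgt and undo the network setup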
00:13:52.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:52.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.836 --rc genhtml_branch_coverage=1 00:13:52.836 --rc genhtml_function_coverage=1 00:13:52.836 --rc genhtml_legend=1 00:13:52.836 --rc geninfo_all_blocks=1 00:13:52.836 --rc geninfo_unexecuted_blocks=1 00:13:52.836 00:13:52.836 ' 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:52.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.836 --rc genhtml_branch_coverage=1 00:13:52.836 --rc genhtml_function_coverage=1 00:13:52.836 --rc genhtml_legend=1 00:13:52.836 --rc geninfo_all_blocks=1 00:13:52.836 --rc geninfo_unexecuted_blocks=1 00:13:52.836 00:13:52.836 ' 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:52.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.836 --rc genhtml_branch_coverage=1 00:13:52.836 --rc genhtml_function_coverage=1 00:13:52.836 --rc genhtml_legend=1 00:13:52.836 --rc geninfo_all_blocks=1 00:13:52.836 --rc geninfo_unexecuted_blocks=1 00:13:52.836 00:13:52.836 ' 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:52.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.836 --rc genhtml_branch_coverage=1 00:13:52.836 --rc genhtml_function_coverage=1 00:13:52.836 --rc genhtml_legend=1 00:13:52.836 --rc geninfo_all_blocks=1 00:13:52.836 --rc geninfo_unexecuted_blocks=1 00:13:52.836 00:13:52.836 ' 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:52.836 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:52.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:52.837 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:59.406 19:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:59.406 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:59.406 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:59.406 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:59.407 Found net devices under 0000:86:00.0: cvl_0_0 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:59.407 Found net devices under 0000:86:00.1: cvl_0_1 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:59.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:59.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:13:59.407 00:13:59.407 --- 10.0.0.2 ping statistics --- 00:13:59.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.407 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:59.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:59.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:13:59.407 00:13:59.407 --- 10.0.0.1 ping statistics --- 00:13:59.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.407 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3693454 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3693454 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3693454 ']' 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:59.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.407 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.407 [2024-11-26 19:15:21.960044] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:13:59.407 [2024-11-26 19:15:21.960094] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.407 [2024-11-26 19:15:22.038326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.407 [2024-11-26 19:15:22.081121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.407 [2024-11-26 19:15:22.081155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.407 [2024-11-26 19:15:22.081162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.407 [2024-11-26 19:15:22.081168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.407 [2024-11-26 19:15:22.081173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.407 [2024-11-26 19:15:22.081723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.407 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.407 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:59.407 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:59.407 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:59.407 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.407 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.407 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.408 [2024-11-26 19:15:22.226154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.408 [2024-11-26 19:15:22.250364] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.408 NULL1 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.408 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:59.408 [2024-11-26 19:15:22.308765] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
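Before the initiator output that follows, the trace from fused_ordering.sh (lines 13-22) shows the target being configured over RPC: nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace, a TCP transport is created, subsystem nqn.2016-06.io.spdk:cnode1 gets a 10.0.0.2:4420 TCP listener and a 1000 MiB null bdev as its namespace, and the fused_ordering tool is then pointed at it. A sketch of the same sequence as plain rpc.py calls is below; the script actually goes through its rpc_cmd wrapper as shown in the trace, so the rpc.py form is an assumption, but the method names and flags are copied verbatim from the log:

# Equivalent rpc.py calls for the setup traced in fused_ordering.sh (lines 15-22).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                    # line 15: TCP transport (options as in the trace)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                                    # line 16: allow any host, up to 10 namespaces
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                        # line 17: listen on the in-namespace target IP
scripts/rpc.py bdev_null_create NULL1 1000 512                            # line 18: 1000 MiB null bdev, 512-byte blocks
scripts/rpc.py bdev_wait_for_examine                                      # line 19: wait for the bdev layer to settle
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1     # line 20: exposed as the 1 GB namespace seen below
test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'   # line 22: run the initiator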
00:13:59.408 [2024-11-26 19:15:22.308797] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3693481 ] 00:13:59.666 Attached to nqn.2016-06.io.spdk:cnode1 00:13:59.666 Namespace ID: 1 size: 1GB 00:13:59.666 fused_ordering(0) 00:13:59.666 fused_ordering(1) 00:13:59.666 fused_ordering(2) 00:13:59.666 fused_ordering(3) 00:13:59.666 fused_ordering(4) 00:13:59.666 fused_ordering(5) 00:13:59.666 fused_ordering(6) 00:13:59.666 fused_ordering(7) 00:13:59.666 fused_ordering(8) 00:13:59.666 fused_ordering(9) 00:13:59.666 fused_ordering(10) 00:13:59.666 fused_ordering(11) 00:13:59.666 fused_ordering(12) 00:13:59.666 fused_ordering(13) 00:13:59.666 fused_ordering(14) 00:13:59.666 fused_ordering(15) 00:13:59.666 fused_ordering(16) 00:13:59.666 fused_ordering(17) 00:13:59.666 fused_ordering(18) 00:13:59.666 fused_ordering(19) 00:13:59.666 fused_ordering(20) 00:13:59.666 fused_ordering(21) 00:13:59.666 fused_ordering(22) 00:13:59.666 fused_ordering(23) 00:13:59.666 fused_ordering(24) 00:13:59.666 fused_ordering(25) 00:13:59.666 fused_ordering(26) 00:13:59.666 fused_ordering(27) 00:13:59.666 fused_ordering(28) 00:13:59.666 fused_ordering(29) 00:13:59.666 fused_ordering(30) 00:13:59.666 fused_ordering(31) 00:13:59.666 fused_ordering(32) 00:13:59.666 fused_ordering(33) 00:13:59.666 fused_ordering(34) 00:13:59.666 fused_ordering(35) 00:13:59.666 fused_ordering(36) 00:13:59.666 fused_ordering(37) 00:13:59.666 fused_ordering(38) 00:13:59.666 fused_ordering(39) 00:13:59.666 fused_ordering(40) 00:13:59.666 fused_ordering(41) 00:13:59.666 fused_ordering(42) 00:13:59.666 fused_ordering(43) 00:13:59.666 fused_ordering(44) 00:13:59.666 fused_ordering(45) 00:13:59.666 fused_ordering(46) 00:13:59.666 fused_ordering(47) 00:13:59.666 fused_ordering(48) 00:13:59.666 fused_ordering(49) 00:13:59.666 fused_ordering(50) 00:13:59.666 fused_ordering(51) 00:13:59.666 fused_ordering(52) 00:13:59.666 fused_ordering(53) 00:13:59.666 fused_ordering(54) 00:13:59.666 fused_ordering(55) 00:13:59.666 fused_ordering(56) 00:13:59.666 fused_ordering(57) 00:13:59.666 fused_ordering(58) 00:13:59.666 fused_ordering(59) 00:13:59.666 fused_ordering(60) 00:13:59.666 fused_ordering(61) 00:13:59.666 fused_ordering(62) 00:13:59.666 fused_ordering(63) 00:13:59.666 fused_ordering(64) 00:13:59.666 fused_ordering(65) 00:13:59.666 fused_ordering(66) 00:13:59.666 fused_ordering(67) 00:13:59.666 fused_ordering(68) 00:13:59.666 fused_ordering(69) 00:13:59.666 fused_ordering(70) 00:13:59.666 fused_ordering(71) 00:13:59.666 fused_ordering(72) 00:13:59.666 fused_ordering(73) 00:13:59.666 fused_ordering(74) 00:13:59.666 fused_ordering(75) 00:13:59.666 fused_ordering(76) 00:13:59.666 fused_ordering(77) 00:13:59.666 fused_ordering(78) 00:13:59.666 fused_ordering(79) 00:13:59.666 fused_ordering(80) 00:13:59.666 fused_ordering(81) 00:13:59.666 fused_ordering(82) 00:13:59.666 fused_ordering(83) 00:13:59.666 fused_ordering(84) 00:13:59.666 fused_ordering(85) 00:13:59.666 fused_ordering(86) 00:13:59.666 fused_ordering(87) 00:13:59.666 fused_ordering(88) 00:13:59.666 fused_ordering(89) 00:13:59.666 fused_ordering(90) 00:13:59.666 fused_ordering(91) 00:13:59.666 fused_ordering(92) 00:13:59.666 fused_ordering(93) 00:13:59.666 fused_ordering(94) 00:13:59.666 fused_ordering(95) 00:13:59.666 fused_ordering(96) 00:13:59.666 fused_ordering(97) 00:13:59.666 fused_ordering(98) 
00:13:59.666 fused_ordering(99) ... 00:14:01.012 fused_ordering(958) [repetitive per-request trace condensed: the fused_ordering test emitted one fused_ordering(N) line for every consecutive sequence number from 99 through 958, timestamped between 00:13:59.666 and 00:14:01.012; the trace resumes verbatim below at fused_ordering(959) and runs through fused_ordering(1023)]
00:14:01.012 fused_ordering(959) 00:14:01.012 fused_ordering(960) 00:14:01.012 fused_ordering(961) 00:14:01.012 fused_ordering(962) 00:14:01.012 fused_ordering(963) 00:14:01.012 fused_ordering(964) 00:14:01.012 fused_ordering(965) 00:14:01.012 fused_ordering(966) 00:14:01.012 fused_ordering(967) 00:14:01.012 fused_ordering(968) 00:14:01.012 fused_ordering(969) 00:14:01.012 fused_ordering(970) 00:14:01.012 fused_ordering(971) 00:14:01.012 fused_ordering(972) 00:14:01.012 fused_ordering(973) 00:14:01.012 fused_ordering(974) 00:14:01.012 fused_ordering(975) 00:14:01.012 fused_ordering(976) 00:14:01.012 fused_ordering(977) 00:14:01.012 fused_ordering(978) 00:14:01.012 fused_ordering(979) 00:14:01.012 fused_ordering(980) 00:14:01.012 fused_ordering(981) 00:14:01.012 fused_ordering(982) 00:14:01.012 fused_ordering(983) 00:14:01.012 fused_ordering(984) 00:14:01.012 fused_ordering(985) 00:14:01.012 fused_ordering(986) 00:14:01.012 fused_ordering(987) 00:14:01.012 fused_ordering(988) 00:14:01.012 fused_ordering(989) 00:14:01.012 fused_ordering(990) 00:14:01.012 fused_ordering(991) 00:14:01.012 fused_ordering(992) 00:14:01.012 fused_ordering(993) 00:14:01.012 fused_ordering(994) 00:14:01.012 fused_ordering(995) 00:14:01.012 fused_ordering(996) 00:14:01.012 fused_ordering(997) 00:14:01.012 fused_ordering(998) 00:14:01.012 fused_ordering(999) 00:14:01.012 fused_ordering(1000) 00:14:01.012 fused_ordering(1001) 00:14:01.012 fused_ordering(1002) 00:14:01.012 fused_ordering(1003) 00:14:01.012 fused_ordering(1004) 00:14:01.012 fused_ordering(1005) 00:14:01.012 fused_ordering(1006) 00:14:01.012 fused_ordering(1007) 00:14:01.012 fused_ordering(1008) 00:14:01.012 fused_ordering(1009) 00:14:01.012 fused_ordering(1010) 00:14:01.012 fused_ordering(1011) 00:14:01.012 fused_ordering(1012) 00:14:01.012 fused_ordering(1013) 00:14:01.012 fused_ordering(1014) 00:14:01.012 fused_ordering(1015) 00:14:01.012 fused_ordering(1016) 00:14:01.012 fused_ordering(1017) 00:14:01.012 fused_ordering(1018) 00:14:01.012 fused_ordering(1019) 00:14:01.012 fused_ordering(1020) 00:14:01.012 fused_ordering(1021) 00:14:01.012 fused_ordering(1022) 00:14:01.012 fused_ordering(1023) 00:14:01.012 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:01.012 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:01.012 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:01.012 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:01.012 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:01.012 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:01.012 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:01.013 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:01.013 rmmod nvme_tcp 00:14:01.013 rmmod nvme_fabrics 00:14:01.013 rmmod nvme_keyring 00:14:01.271 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:01.271 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:01.272 19:15:24 
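The nvmftestfini/nvmfcleanup sequence traced here, together with the killprocess, iptables and namespace steps on the next lines, reduces to a short cleanup recipe. A minimal sketch of the equivalent manual commands, run as root and reusing the PID, interface and namespace names from this log (3693454, cvl_0_1, cvl_0_0_ns_spdk); this is an illustration, not the harness's exact helper code:

  modprobe -v -r nvme-tcp        # unloads nvme_tcp/nvme_fabrics/nvme_keyring as shown above
  modprobe -v -r nvme-fabrics
  kill 3693454                   # stop the SPDK target started for this test
  while kill -0 3693454 2>/dev/null; do sleep 0.5; done   # wait for it to exit
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK-tagged rules
  ip netns delete cvl_0_0_ns_spdk                         # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                                # clear the initiator-side address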
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3693454 ']' 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3693454 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3693454 ']' 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3693454 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3693454 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3693454' 00:14:01.272 killing process with pid 3693454 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3693454 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3693454 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:01.272 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:03.809 00:14:03.809 real 0m10.741s 00:14:03.809 user 0m4.998s 00:14:03.809 sys 0m5.850s 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.809 ************************************ 00:14:03.809 END TEST nvmf_fused_ordering 00:14:03.809 
************************************ 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:03.809 ************************************ 00:14:03.809 START TEST nvmf_ns_masking 00:14:03.809 ************************************ 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:03.809 * Looking for test storage... 00:14:03.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:03.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.809 --rc genhtml_branch_coverage=1 00:14:03.809 --rc genhtml_function_coverage=1 00:14:03.809 --rc genhtml_legend=1 00:14:03.809 --rc geninfo_all_blocks=1 00:14:03.809 --rc geninfo_unexecuted_blocks=1 00:14:03.809 00:14:03.809 ' 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:03.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.809 --rc genhtml_branch_coverage=1 00:14:03.809 --rc genhtml_function_coverage=1 00:14:03.809 --rc genhtml_legend=1 00:14:03.809 --rc geninfo_all_blocks=1 00:14:03.809 --rc geninfo_unexecuted_blocks=1 00:14:03.809 00:14:03.809 ' 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:03.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.809 --rc genhtml_branch_coverage=1 00:14:03.809 --rc genhtml_function_coverage=1 00:14:03.809 --rc genhtml_legend=1 00:14:03.809 --rc geninfo_all_blocks=1 00:14:03.809 --rc geninfo_unexecuted_blocks=1 00:14:03.809 00:14:03.809 ' 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:03.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.809 --rc genhtml_branch_coverage=1 00:14:03.809 --rc genhtml_function_coverage=1 00:14:03.809 --rc genhtml_legend=1 00:14:03.809 --rc geninfo_all_blocks=1 00:14:03.809 --rc geninfo_unexecuted_blocks=1 00:14:03.809 00:14:03.809 ' 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
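The lt/cmp_versions trace just above is scripts/common.sh checking that the detected lcov (1.15 here) is older than 2 before choosing coverage flags. A standalone sketch of that field-by-field comparison (an illustration, not the script's exact implementation):

  version_lt() {                       # returns 0 if $1 sorts before $2
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1                           # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov 1.15 is treated as pre-2.x"   # matches this run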
nvmf/common.sh@7 -- # uname -s 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.809 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:03.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
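The "[: : integer expression expected" line above is build_nvmf_app_args testing an empty NVMF_APP_SHM_ID with [ ... -eq 1 ]; the trace continues normally afterwards, so it appears cosmetic in this run. A tiny illustration of the failure mode and one defensive variant (a hypothetical stand-in variable, not a patch to the real common.sh):

  shm_id=""                            # hypothetical stand-in for NVMF_APP_SHM_ID
  [ "$shm_id" -eq 1 ]                  # -> bash: [: : integer expression expected
  [ "${shm_id:--1}" -eq 1 ]            # defaulting the empty value avoids the message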
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=a17c3a38-7b6e-4121-b738-3d52c524c2f6 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=541e6b05-bda8-4a06-be31-fa19ce1239c6 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=68189a01-6eb6-4163-8c58-e99712683bab 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:03.810 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:10.378 19:15:32 
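ns_masking.sh keys its later checks on the identifiers generated just above: two namespace UUIDs, fixed subsystem/host NQNs, and a host ID that will be pinned at connect time. A sketch of producing the same kind of identifiers by hand (fresh uuidgen output will differ from the values in this log):

  ns1uuid=$(uuidgen)                   # a17c3a38-... in this run
  ns2uuid=$(uuidgen)                   # 541e6b05-... in this run
  hostid=$(uuidgen)                    # later passed to 'nvme connect -I'
  subnqn=nqn.2016-06.io.spdk:cnode1
  hostnqn1=nqn.2016-06.io.spdk:host1
  hostnqn2=nqn.2016-06.io.spdk:host2
  nvme gen-hostnqn                     # nvme-cli helper used earlier to derive NVME_HOSTNQN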
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:10.378 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:10.378 19:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:10.378 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:10.378 Found net devices under 0000:86:00.0: cvl_0_0 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
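gather_supported_nvmf_pci_devs, traced above, matches the two Intel E810 functions (device ID 0x159b) and then maps each PCI address to its kernel netdev through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A minimal sketch of that mapping step, using the addresses reported in this log:

  for pci in 0000:86:00.0 0000:86:00.1; do          # E810 functions found above
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$path" ] || continue                    # skip functions with no bound netdev
      echo "Found net devices under $pci: ${path##*/}"
    done
  done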
00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:10.378 Found net devices under 0000:86:00.1: cvl_0_1 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:10.378 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.379 19:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:10.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:14:10.379 00:14:10.379 --- 10.0.0.2 ping statistics --- 00:14:10.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.379 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:14:10.379 00:14:10.379 --- 10.0.0.1 ping statistics --- 00:14:10.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.379 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3697590 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3697590 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3697590 ']' 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
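nvmf_tcp_init, traced above, splits the two E810 ports across network namespaces so the target (10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1, in the root namespace) talk over real hardware, opens TCP/4420 in the firewall, and ping-checks both directions. The same setup condensed into one sketch (run as root; names and addresses taken from this log):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator-side port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                           # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target -> initiator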
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.379 [2024-11-26 19:15:32.763911] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:14:10.379 [2024-11-26 19:15:32.763962] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.379 [2024-11-26 19:15:32.843992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.379 [2024-11-26 19:15:32.884333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.379 [2024-11-26 19:15:32.884368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.379 [2024-11-26 19:15:32.884375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.379 [2024-11-26 19:15:32.884380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.379 [2024-11-26 19:15:32.884385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
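The trace above is the standard TCP rig setup from test/nvmf/common.sh (nvmf_tcp_init followed by nvmfappstart): one port of the e810 NIC is moved into a private network namespace for the target, the other stays on the host as the initiator, TCP port 4420 is opened, reachability is verified in both directions, and nvmf_tgt is started inside the namespace. A minimal sketch of that sequence, using the interface names, addresses and flags seen in this run rather than the exact helper code, looks like:

    # Rough reconstruction of the setup traced above (run as root from the spdk
    # checkout); the real logic lives in test/nvmf/common.sh.
    TGT_IF=cvl_0_0            # e810 port handed to the target
    INI_IF=cvl_0_1            # e810 port left on the host (initiator)
    TGT_NS=cvl_0_0_ns_spdk    # network namespace the target runs in

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$TGT_NS"
    ip link set "$TGT_IF" netns "$TGT_NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$TGT_NS" ip link set "$TGT_IF" up
    ip netns exec "$TGT_NS" ip link set lo up

    # Let NVMe/TCP traffic in from the initiator side; the rule carries an
    # SPDK_NVMF-tagged comment so teardown can filter it back out later with
    # iptables-save | grep -v SPDK_NVMF | iptables-restore.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"

    # Sanity-check both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec "$TGT_NS" ping -c 1 10.0.0.1

    # Kernel initiator used for the nvme connect calls later in the test.
    modprobe nvme-tcp

    # Start the NVMe-oF target inside the namespace and wait for its RPC socket.
    ip netns exec "$TGT_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # waitforlisten "$nvmfpid" /var/tmp/spdk.sock   # helper from autotest_common.sh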
00:14:10.379 [2024-11-26 19:15:32.884948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:10.379 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.379 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.379 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:10.379 [2024-11-26 19:15:33.177429] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.379 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:10.379 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:10.379 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:10.379 Malloc1 00:14:10.379 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:10.639 Malloc2 00:14:10.639 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:10.898 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:11.156 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.156 [2024-11-26 19:15:34.219634] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.156 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:11.156 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 68189a01-6eb6-4163-8c58-e99712683bab -a 10.0.0.2 -s 4420 -i 4 00:14:11.415 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:11.415 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:11.415 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:11.415 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:11.415 
19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:13.944 [ 0]:0x1 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df750fabb5e14b9b9527b5437a7c3c12 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df750fabb5e14b9b9527b5437a7c3c12 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:13.944 [ 0]:0x1 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df750fabb5e14b9b9527b5437a7c3c12 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df750fabb5e14b9b9527b5437a7c3c12 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.944 19:15:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:13.944 [ 1]:0x2 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=558b5dc883eb4de4b052bd78ffb8f0ae 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 558b5dc883eb4de4b052bd78ffb8f0ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:13.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.944 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.202 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:14.461 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:14.461 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 68189a01-6eb6-4163-8c58-e99712683bab -a 10.0.0.2 -s 4420 -i 4 00:14:14.461 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:14.461 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:14.461 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:14.461 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:14.461 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:14.461 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:16.994 [ 0]:0x2 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=558b5dc883eb4de4b052bd78ffb8f0ae 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 558b5dc883eb4de4b052bd78ffb8f0ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:16.994 [ 0]:0x1 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:16.994 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:16.994 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df750fabb5e14b9b9527b5437a7c3c12 00:14:16.994 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df750fabb5e14b9b9527b5437a7c3c12 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.994 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:16.994 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:16.994 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:16.994 [ 1]:0x2 00:14:16.994 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:16.994 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:16.994 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=558b5dc883eb4de4b052bd78ffb8f0ae 00:14:16.994 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 558b5dc883eb4de4b052bd78ffb8f0ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.994 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:17.252 19:15:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:17.252 [ 0]:0x2 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:17.252 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:17.511 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=558b5dc883eb4de4b052bd78ffb8f0ae 00:14:17.511 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 558b5dc883eb4de4b052bd78ffb8f0ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:17.511 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:17.511 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:17.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.511 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:17.511 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:17.511 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 68189a01-6eb6-4163-8c58-e99712683bab -a 10.0.0.2 -s 4420 -i 4 00:14:17.769 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:17.769 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:17.769 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:17.769 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:17.769 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:17.769 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:20.301 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:20.301 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:20.301 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:20.301 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:20.301 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:20.301 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:20.301 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:20.301 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:20.301 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:20.301 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:20.301 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:20.301 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.301 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:20.301 [ 0]:0x1 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df750fabb5e14b9b9527b5437a7c3c12 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df750fabb5e14b9b9527b5437a7c3c12 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:20.301 [ 1]:0x2 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=558b5dc883eb4de4b052bd78ffb8f0ae 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 558b5dc883eb4de4b052bd78ffb8f0ae != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:20.301 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:20.560 [ 0]:0x2 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=558b5dc883eb4de4b052bd78ffb8f0ae 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 558b5dc883eb4de4b052bd78ffb8f0ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.560 19:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:20.560 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:20.560 [2024-11-26 19:15:43.658333] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:20.560 request: 00:14:20.560 { 00:14:20.560 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.560 "nsid": 2, 00:14:20.560 "host": "nqn.2016-06.io.spdk:host1", 00:14:20.560 "method": "nvmf_ns_remove_host", 00:14:20.560 "req_id": 1 00:14:20.560 } 00:14:20.560 Got JSON-RPC error response 00:14:20.560 response: 00:14:20.560 { 00:14:20.560 "code": -32602, 00:14:20.560 "message": "Invalid parameters" 00:14:20.560 } 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:20.820 19:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:20.820 [ 0]:0x2 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=558b5dc883eb4de4b052bd78ffb8f0ae 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 558b5dc883eb4de4b052bd78ffb8f0ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:20.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:21.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.079 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3699571 00:14:21.079 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:21.079 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.079 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3699571 /var/tmp/host.sock 00:14:21.079 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3699571 ']' 00:14:21.079 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:21.079 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.079 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:21.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:21.079 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.079 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:21.079 [2024-11-26 19:15:44.044559] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:14:21.079 [2024-11-26 19:15:44.044605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3699571 ] 00:14:21.079 [2024-11-26 19:15:44.119340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.079 [2024-11-26 19:15:44.159460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.337 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:21.337 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:21.337 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.595 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:21.854 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid a17c3a38-7b6e-4121-b738-3d52c524c2f6 00:14:21.854 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:21.854 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g A17C3A387B6E4121B7383D52C524C2F6 -i 00:14:21.854 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 541e6b05-bda8-4a06-be31-fa19ce1239c6 00:14:21.854 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:21.854 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 541E6B05BDA84A06BE31FA19CE1239C6 -i 00:14:22.112 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:22.387 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:22.645 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:22.645 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:22.903 nvme0n1 00:14:22.903 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:22.903 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:23.160 nvme1n2 00:14:23.160 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:23.160 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:23.160 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:23.160 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:23.160 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:23.417 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:23.417 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:23.417 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:23.417 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:23.678 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ a17c3a38-7b6e-4121-b738-3d52c524c2f6 == \a\1\7\c\3\a\3\8\-\7\b\6\e\-\4\1\2\1\-\b\7\3\8\-\3\d\5\2\c\5\2\4\c\2\f\6 ]] 00:14:23.678 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:23.678 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:23.678 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:23.946 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
541e6b05-bda8-4a06-be31-fa19ce1239c6 == \5\4\1\e\6\b\0\5\-\b\d\a\8\-\4\a\0\6\-\b\e\3\1\-\f\a\1\9\c\e\1\2\3\9\c\6 ]] 00:14:23.946 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.204 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:24.204 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid a17c3a38-7b6e-4121-b738-3d52c524c2f6 00:14:24.204 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:24.204 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A17C3A387B6E4121B7383D52C524C2F6 00:14:24.204 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:24.205 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A17C3A387B6E4121B7383D52C524C2F6 00:14:24.205 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:24.205 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:24.205 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:24.205 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:24.205 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:24.205 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:24.205 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:24.205 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:24.205 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A17C3A387B6E4121B7383D52C524C2F6 00:14:24.463 [2024-11-26 19:15:47.452739] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:24.463 [2024-11-26 19:15:47.452767] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:24.463 [2024-11-26 19:15:47.452775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.463 request: 00:14:24.463 { 00:14:24.463 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:24.463 "namespace": { 00:14:24.463 "bdev_name": 
"invalid", 00:14:24.463 "nsid": 1, 00:14:24.463 "nguid": "A17C3A387B6E4121B7383D52C524C2F6", 00:14:24.463 "no_auto_visible": false 00:14:24.463 }, 00:14:24.463 "method": "nvmf_subsystem_add_ns", 00:14:24.463 "req_id": 1 00:14:24.463 } 00:14:24.463 Got JSON-RPC error response 00:14:24.463 response: 00:14:24.463 { 00:14:24.463 "code": -32602, 00:14:24.463 "message": "Invalid parameters" 00:14:24.463 } 00:14:24.463 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:24.463 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:24.463 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:24.463 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:24.463 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid a17c3a38-7b6e-4121-b738-3d52c524c2f6 00:14:24.463 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:24.463 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g A17C3A387B6E4121B7383D52C524C2F6 -i 00:14:24.721 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:26.623 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:26.623 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:26.623 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:26.881 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:26.881 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3699571 00:14:26.881 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3699571 ']' 00:14:26.881 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3699571 00:14:26.881 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:26.881 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.881 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3699571 00:14:26.881 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:26.881 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:26.881 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3699571' 00:14:26.881 killing process with pid 3699571 00:14:26.881 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3699571 00:14:26.881 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3699571 00:14:27.449 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.449 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:27.449 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:27.449 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:27.449 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:27.449 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:27.449 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:27.449 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:27.449 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:27.449 rmmod nvme_tcp 00:14:27.449 rmmod nvme_fabrics 00:14:27.449 rmmod nvme_keyring 00:14:27.449 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:27.449 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:27.449 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:27.449 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3697590 ']' 00:14:27.449 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3697590 00:14:27.449 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3697590 ']' 00:14:27.449 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3697590 00:14:27.449 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:27.449 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.449 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3697590 00:14:27.708 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.708 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.708 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3697590' 00:14:27.708 killing process with pid 3697590 00:14:27.708 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3697590 00:14:27.708 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3697590 00:14:27.708 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:27.708 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:27.708 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:27.708 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:27.708 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:27.708 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
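With the masking checks finished, nvmftestfini is tearing the rig down here: the kernel initiator modules are unloaded (rmmod nvme_tcp, nvme_fabrics, nvme_keyring), the target process 3697590 is killed, the SPDK_NVMF iptables rule is stripped back out, and the namespace and addresses are removed. The masking behaviour exercised above condenses to roughly the following sequence, where rpc.py stands for scripts/rpc.py in the checkout and the NQNs, sizes and addresses are the ones used in this run:

    # Target side: one auto-visible and one masked namespace on the same subsystem.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2                    # visible to every host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible  # hidden by default
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1       # unhide nsid 1 for host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1       # hide it again
    # Removing a host from an auto-visible namespace is rejected, as the
    # "Invalid parameters" JSON-RPC error for nsid 2 earlier in the trace shows.

    # Initiator side: visibility is observed through the kernel NVMe host stack
    # (this run also passes a host UUID with -I on the connect).
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -i 4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    nvme list-ns /dev/nvme0                               # masked namespaces do not appear
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all-zero NGUID means not visible
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1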
00:14:27.708 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:27.708 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:27.708 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:27.708 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.708 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:27.708 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.243 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:30.243 00:14:30.243 real 0m26.341s 00:14:30.243 user 0m31.398s 00:14:30.243 sys 0m7.136s 00:14:30.243 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.243 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:30.243 ************************************ 00:14:30.243 END TEST nvmf_ns_masking 00:14:30.243 ************************************ 00:14:30.243 19:15:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:30.243 19:15:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:30.243 19:15:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:30.243 19:15:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.243 19:15:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:30.243 ************************************ 00:14:30.243 START TEST nvmf_nvme_cli 00:14:30.243 ************************************ 00:14:30.243 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:30.243 * Looking for test storage... 
00:14:30.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:30.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.243 --rc genhtml_branch_coverage=1 00:14:30.243 --rc genhtml_function_coverage=1 00:14:30.243 --rc genhtml_legend=1 00:14:30.243 --rc geninfo_all_blocks=1 00:14:30.243 --rc geninfo_unexecuted_blocks=1 00:14:30.243 00:14:30.243 ' 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:30.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.243 --rc genhtml_branch_coverage=1 00:14:30.243 --rc genhtml_function_coverage=1 00:14:30.243 --rc genhtml_legend=1 00:14:30.243 --rc geninfo_all_blocks=1 00:14:30.243 --rc geninfo_unexecuted_blocks=1 00:14:30.243 00:14:30.243 ' 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:30.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.243 --rc genhtml_branch_coverage=1 00:14:30.243 --rc genhtml_function_coverage=1 00:14:30.243 --rc genhtml_legend=1 00:14:30.243 --rc geninfo_all_blocks=1 00:14:30.243 --rc geninfo_unexecuted_blocks=1 00:14:30.243 00:14:30.243 ' 00:14:30.243 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:30.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.243 --rc genhtml_branch_coverage=1 00:14:30.243 --rc genhtml_function_coverage=1 00:14:30.243 --rc genhtml_legend=1 00:14:30.243 --rc geninfo_all_blocks=1 00:14:30.243 --rc geninfo_unexecuted_blocks=1 00:14:30.243 00:14:30.244 ' 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
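The trace just above steps through the lcov version gate in scripts/common.sh: lt 1.15 2 delegates to cmp_versions, which splits both version strings on dots, dashes and colons and compares them component by component. A minimal stand-alone sketch of that comparison follows; it is an illustrative reimplementation under the same splitting assumption, not the script itself, with helper names merely mirroring the trace.

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:                     # split fields on dots, dashes and colons, as in the trace
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            ((a > b)) && { [[ $op == '>' ]]; return; }
            ((a < b)) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]                 # every component matched
    }

    lt 1.15 2 && echo 'lcov 1.15 predates 2, enable the compatibility lcov options'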
00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:30.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:30.244 19:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:30.244 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:36.813 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:36.814 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:36.814 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.814 
19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:36.814 Found net devices under 0000:86:00.0: cvl_0_0 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:36.814 Found net devices under 0000:86:00.1: cvl_0_1 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:36.814 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:36.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:36.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:14:36.814 00:14:36.814 --- 10.0.0.2 ping statistics --- 00:14:36.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.814 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:36.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:36.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:14:36.814 00:14:36.814 --- 10.0.0.1 ping statistics --- 00:14:36.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.814 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3704104 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3704104 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3704104 ']' 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.814 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.814 [2024-11-26 19:15:59.115299] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:14:36.814 [2024-11-26 19:15:59.115338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.814 [2024-11-26 19:15:59.193380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:36.815 [2024-11-26 19:15:59.240865] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.815 [2024-11-26 19:15:59.240898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.815 [2024-11-26 19:15:59.240906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.815 [2024-11-26 19:15:59.240913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.815 [2024-11-26 19:15:59.240918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.815 [2024-11-26 19:15:59.242474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.815 [2024-11-26 19:15:59.242583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.815 [2024-11-26 19:15:59.242712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.815 [2024-11-26 19:15:59.242713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:37.073 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:37.073 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:37.073 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:37.073 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:37.073 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:37.073 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.073 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:37.073 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.073 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:37.073 [2024-11-26 19:15:59.990836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.073 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.073 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:37.073 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.073 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:37.073 Malloc0 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
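Stripped of the xtrace plumbing, the bring-up this test drives reduces to a short sequence: move one port into a private network namespace, start nvmf_tgt inside that namespace, wait for its RPC socket, then configure it over rpc.py. A condensed sketch with the interface names, addresses and RPC parameters from this run is below (paths abbreviated relative to the spdk checkout; the waitforlisten step is approximated by an RPC poll, and the subsystem and listener RPCs continue in the trace that follows).

    # Network plumbing (nvmf_tcp_init): the target port lives in its own namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

    # Target start (nvmfappstart) and configuration over the RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    until ./scripts/rpc.py -t 1 rpc_get_methods &>/dev/null; do sleep 0.5; done

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
        -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420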
00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:37.073 Malloc1 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:37.073 [2024-11-26 19:16:00.095505] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.073 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:37.330 00:14:37.330 Discovery Log Number of Records 2, Generation counter 2 00:14:37.330 =====Discovery Log Entry 0====== 00:14:37.330 trtype: tcp 00:14:37.330 adrfam: ipv4 00:14:37.330 subtype: current discovery subsystem 00:14:37.330 treq: not required 00:14:37.330 portid: 0 00:14:37.330 trsvcid: 4420 00:14:37.330 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:37.330 traddr: 10.0.0.2 00:14:37.330 eflags: explicit discovery connections, duplicate discovery information 00:14:37.330 sectype: none 00:14:37.330 =====Discovery Log Entry 1====== 00:14:37.330 trtype: tcp 00:14:37.330 adrfam: ipv4 00:14:37.330 subtype: nvme subsystem 00:14:37.330 treq: not required 00:14:37.330 portid: 0 00:14:37.330 trsvcid: 4420 00:14:37.330 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:37.330 traddr: 10.0.0.2 00:14:37.330 eflags: none 00:14:37.330 sectype: none 00:14:37.330 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:37.330 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:37.330 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:37.330 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.330 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:37.330 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:37.330 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.330 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:37.330 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.330 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:37.330 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:38.700 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:38.700 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:38.700 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:38.700 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:38.700 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:38.700 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:40.597 19:16:03 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:40.597 /dev/nvme0n2 ]] 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:40.597 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:40.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.855 19:16:03 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:40.855 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:40.855 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:40.855 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:40.855 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:40.855 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:40.855 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:14:40.855 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:40.855 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:40.855 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.855 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.855 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.855 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:40.855 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:40.855 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:40.855 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:40.855 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:40.855 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:40.855 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:40.855 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:40.855 rmmod nvme_tcp 00:14:40.855 rmmod nvme_fabrics 00:14:41.114 rmmod nvme_keyring 00:14:41.114 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:41.114 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:41.114 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:41.114 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3704104 ']' 00:14:41.114 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3704104 00:14:41.114 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3704104 ']' 00:14:41.114 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3704104 00:14:41.114 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:41.114 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:41.114 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3704104 00:14:41.114 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:41.114 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:41.114 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3704104' 00:14:41.114 killing process with pid 3704104 00:14:41.114 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3704104 00:14:41.114 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3704104 00:14:41.373 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:41.373 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:41.373 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:41.373 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:41.373 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:41.373 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:41.373 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:41.373 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:41.373 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:41.373 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.373 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.373 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.277 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:43.277 00:14:43.277 real 0m13.418s 00:14:43.277 user 0m21.803s 00:14:43.277 sys 0m5.140s 00:14:43.277 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.277 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.277 ************************************ 00:14:43.277 END TEST nvmf_nvme_cli 00:14:43.277 ************************************ 00:14:43.277 19:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:43.277 19:16:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:43.277 19:16:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:43.277 19:16:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.277 19:16:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:43.536 ************************************ 00:14:43.536 START TEST nvmf_vfio_user 00:14:43.536 ************************************ 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:43.536 * Looking for test storage... 00:14:43.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:43.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.536 --rc genhtml_branch_coverage=1 00:14:43.536 --rc genhtml_function_coverage=1 00:14:43.536 --rc genhtml_legend=1 00:14:43.536 --rc geninfo_all_blocks=1 00:14:43.536 --rc geninfo_unexecuted_blocks=1 00:14:43.536 00:14:43.536 ' 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:43.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.536 --rc genhtml_branch_coverage=1 00:14:43.536 --rc genhtml_function_coverage=1 00:14:43.536 --rc genhtml_legend=1 00:14:43.536 --rc geninfo_all_blocks=1 00:14:43.536 --rc geninfo_unexecuted_blocks=1 00:14:43.536 00:14:43.536 ' 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:43.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.536 --rc genhtml_branch_coverage=1 00:14:43.536 --rc genhtml_function_coverage=1 00:14:43.536 --rc genhtml_legend=1 00:14:43.536 --rc geninfo_all_blocks=1 00:14:43.536 --rc geninfo_unexecuted_blocks=1 00:14:43.536 00:14:43.536 ' 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:43.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.536 --rc genhtml_branch_coverage=1 00:14:43.536 --rc genhtml_function_coverage=1 00:14:43.536 --rc genhtml_legend=1 00:14:43.536 --rc geninfo_all_blocks=1 00:14:43.536 --rc geninfo_unexecuted_blocks=1 00:14:43.536 00:14:43.536 ' 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.536 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:43.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
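A note on the '[: : integer expression expected' message just above: the xtrace shows nvmf/common.sh line 33 running '[' '' -eq 1 ']', an integer comparison whose left operand is empty in this configuration, so bash prints the warning, the test evaluates false, and the script carries on (the next xtrace lines continue normally). A minimal reproduction of the bash behaviour; the variable name here is hypothetical, chosen only for illustration:

  flag_from_conf=''                      # empty, as in this run
  if [ "$flag_from_conf" -eq 1 ]; then   # bash: [: : integer expression expected
      echo "feature enabled"
  fi
  echo "setup continues"                 # the failed test is non-fatal, as in the log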
00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3705593 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3705593' 00:14:43.537 Process pid: 3705593 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3705593 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3705593 ']' 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.537 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:43.795 [2024-11-26 19:16:06.677984] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:14:43.795 [2024-11-26 19:16:06.678034] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.795 [2024-11-26 19:16:06.750889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:43.795 [2024-11-26 19:16:06.791783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.795 [2024-11-26 19:16:06.791824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:43.795 [2024-11-26 19:16:06.791831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.795 [2024-11-26 19:16:06.791836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.795 [2024-11-26 19:16:06.791842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.795 [2024-11-26 19:16:06.793385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.795 [2024-11-26 19:16:06.793494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.795 [2024-11-26 19:16:06.793601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.795 [2024-11-26 19:16:06.793602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:43.795 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:43.795 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:43.795 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:45.166 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:45.166 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:45.166 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:45.166 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:45.166 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:45.166 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:45.423 Malloc1 00:14:45.424 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:45.424 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:45.681 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:45.939 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:45.939 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:45.939 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:46.201 Malloc2 00:14:46.202 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
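Condensed from the xtrace above (the second device's namespace and listener registrations follow immediately below), setup_nvmf_vfio_user amounts to starting the target once and then repeating the same short RPC sequence per device. A sketch with the workspace prefix dropped, assuming the nvmf_tgt and rpc.py built in this tree:

  # start the target on cores 0-3 with all tracepoint groups enabled
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  # one VFIOUSER transport for the whole target
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  # per device (shown for device 1; device 2 swaps in Malloc2, cnode2, SPDK2, vfio-user2/2)
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0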
00:14:46.461 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:46.461 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:46.719 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:46.719 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:46.719 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:46.719 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:46.719 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:46.719 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:46.719 [2024-11-26 19:16:09.767495] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:14:46.719 [2024-11-26 19:16:09.767520] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3706106 ] 00:14:46.719 [2024-11-26 19:16:09.806789] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:46.719 [2024-11-26 19:16:09.815049] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:46.719 [2024-11-26 19:16:09.815073] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2be62f3000 00:14:46.719 [2024-11-26 19:16:09.816047] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.719 [2024-11-26 19:16:09.817053] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.719 [2024-11-26 19:16:09.818057] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.719 [2024-11-26 19:16:09.819056] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:46.719 [2024-11-26 19:16:09.820061] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:46.719 [2024-11-26 19:16:09.821061] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.719 [2024-11-26 19:16:09.822073] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:46.719 [2024-11-26 19:16:09.823078] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.719 [2024-11-26 19:16:09.824089] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:46.719 [2024-11-26 19:16:09.824098] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2be62e8000 00:14:46.719 [2024-11-26 19:16:09.825012] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:46.980 [2024-11-26 19:16:09.838932] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:46.980 [2024-11-26 19:16:09.838957] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:46.980 [2024-11-26 19:16:09.841191] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:46.980 [2024-11-26 19:16:09.841227] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:46.980 [2024-11-26 19:16:09.841298] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:46.980 [2024-11-26 19:16:09.841313] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:46.980 [2024-11-26 19:16:09.841318] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:46.980 [2024-11-26 19:16:09.842197] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:46.980 [2024-11-26 19:16:09.842208] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:46.980 [2024-11-26 19:16:09.842215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:46.980 [2024-11-26 19:16:09.843200] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:46.980 [2024-11-26 19:16:09.843209] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:46.980 [2024-11-26 19:16:09.843215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:46.980 [2024-11-26 19:16:09.844207] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:46.980 [2024-11-26 19:16:09.844215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:46.980 [2024-11-26 19:16:09.845210] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
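For readers decoding the register traffic above: the offsets the vfio-user initiator is reading are the standard NVMe controller registers, 0x0 = CAP, 0x8 = VS, 0x14 = CC and 0x1c = CSTS. The VS value 0x10300 unpacks to version 1.3.0 (major in bits 31:16, minor in bits 15:8, tertiary in bits 7:0), matching the 'NVMe Specification Version (VS): 1.3' line in the identify summary further down, and the CC/CSTS pair both reading 0 here is what drives the CC.EN = 0 && CSTS.RDY = 0 enable handshake that follows. A one-line check of the unpacking:

  printf '%d.%d.%d\n' $(( 0x10300 >> 16 )) $(( (0x10300 >> 8) & 0xff )) $(( 0x10300 & 0xff ))   # prints 1.3.0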
00:14:46.980 [2024-11-26 19:16:09.845218] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:46.980 [2024-11-26 19:16:09.845223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:46.980 [2024-11-26 19:16:09.845229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:46.980 [2024-11-26 19:16:09.845336] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:46.980 [2024-11-26 19:16:09.845340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:46.980 [2024-11-26 19:16:09.845345] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:46.981 [2024-11-26 19:16:09.846218] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:46.981 [2024-11-26 19:16:09.847219] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:46.981 [2024-11-26 19:16:09.848224] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:46.981 [2024-11-26 19:16:09.849223] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:46.981 [2024-11-26 19:16:09.849300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:46.981 [2024-11-26 19:16:09.850230] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:46.981 [2024-11-26 19:16:09.850238] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:46.981 [2024-11-26 19:16:09.850242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:46.981 [2024-11-26 19:16:09.850258] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:46.981 [2024-11-26 19:16:09.850265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:46.981 [2024-11-26 19:16:09.850283] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:46.981 [2024-11-26 19:16:09.850288] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.981 [2024-11-26 19:16:09.850292] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.981 [2024-11-26 19:16:09.850305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:46.981 [2024-11-26 19:16:09.850353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:46.981 [2024-11-26 19:16:09.850363] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:46.981 [2024-11-26 19:16:09.850367] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:46.981 [2024-11-26 19:16:09.850371] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:46.981 [2024-11-26 19:16:09.850377] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:46.981 [2024-11-26 19:16:09.850382] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:46.981 [2024-11-26 19:16:09.850386] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:46.981 [2024-11-26 19:16:09.850390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:46.981 [2024-11-26 19:16:09.850397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:46.981 [2024-11-26 19:16:09.850406] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:46.981 [2024-11-26 19:16:09.850415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:46.981 [2024-11-26 19:16:09.850426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.981 [2024-11-26 19:16:09.850433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.981 [2024-11-26 19:16:09.850441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.981 [2024-11-26 19:16:09.850448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.981 [2024-11-26 19:16:09.850453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:46.981 [2024-11-26 19:16:09.850461] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:46.981 [2024-11-26 19:16:09.850469] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:46.981 [2024-11-26 19:16:09.850481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:46.981 [2024-11-26 19:16:09.850487] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:46.981 
[2024-11-26 19:16:09.850492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:46.981 [2024-11-26 19:16:09.850499] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:46.981 [2024-11-26 19:16:09.850505] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:46.981 [2024-11-26 19:16:09.850513] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:46.981 [2024-11-26 19:16:09.850524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:46.981 [2024-11-26 19:16:09.850573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:46.981 [2024-11-26 19:16:09.850580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:46.981 [2024-11-26 19:16:09.850586] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:46.981 [2024-11-26 19:16:09.850592] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:46.981 [2024-11-26 19:16:09.850595] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.981 [2024-11-26 19:16:09.850601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:46.981 [2024-11-26 19:16:09.850612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:46.981 [2024-11-26 19:16:09.850622] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:46.981 [2024-11-26 19:16:09.850633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:46.981 [2024-11-26 19:16:09.850640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:46.981 [2024-11-26 19:16:09.850646] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:46.981 [2024-11-26 19:16:09.850649] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.981 [2024-11-26 19:16:09.850652] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.981 [2024-11-26 19:16:09.850657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.981 [2024-11-26 19:16:09.850680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:46.981 [2024-11-26 19:16:09.850691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:46.981 [2024-11-26 19:16:09.850698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:46.981 [2024-11-26 19:16:09.850704] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:46.981 [2024-11-26 19:16:09.850707] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.981 [2024-11-26 19:16:09.850710] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.981 [2024-11-26 19:16:09.850716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.981 [2024-11-26 19:16:09.850725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:46.982 [2024-11-26 19:16:09.850734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:46.982 [2024-11-26 19:16:09.850740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:46.982 [2024-11-26 19:16:09.850746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:46.982 [2024-11-26 19:16:09.850752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:46.982 [2024-11-26 19:16:09.850756] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:46.982 [2024-11-26 19:16:09.850761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:46.982 [2024-11-26 19:16:09.850766] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:46.982 [2024-11-26 19:16:09.850772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:46.982 [2024-11-26 19:16:09.850777] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:46.982 [2024-11-26 19:16:09.850795] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:46.982 [2024-11-26 19:16:09.850803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:46.982 [2024-11-26 19:16:09.850814] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:46.982 [2024-11-26 19:16:09.850822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:46.982 [2024-11-26 19:16:09.850831] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:46.982 [2024-11-26 19:16:09.850843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:46.982 [2024-11-26 19:16:09.850853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:46.982 [2024-11-26 19:16:09.850863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:46.982 [2024-11-26 19:16:09.850875] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:46.982 [2024-11-26 19:16:09.850879] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:46.982 [2024-11-26 19:16:09.850883] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:46.982 [2024-11-26 19:16:09.850886] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:46.982 [2024-11-26 19:16:09.850888] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:46.982 [2024-11-26 19:16:09.850894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:46.982 [2024-11-26 19:16:09.850900] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:46.982 [2024-11-26 19:16:09.850904] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:46.982 [2024-11-26 19:16:09.850907] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.982 [2024-11-26 19:16:09.850912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:46.982 [2024-11-26 19:16:09.850918] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:46.982 [2024-11-26 19:16:09.850922] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.982 [2024-11-26 19:16:09.850925] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.982 [2024-11-26 19:16:09.850930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.982 [2024-11-26 19:16:09.850936] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:46.982 [2024-11-26 19:16:09.850940] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:46.982 [2024-11-26 19:16:09.850943] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.982 [2024-11-26 19:16:09.850948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:46.982 [2024-11-26 19:16:09.850955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:46.982 [2024-11-26 19:16:09.850966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:46.982 [2024-11-26 19:16:09.850975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:46.982 [2024-11-26 19:16:09.850981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:46.982 ===================================================== 00:14:46.982 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:46.982 ===================================================== 00:14:46.982 Controller Capabilities/Features 00:14:46.982 ================================ 00:14:46.982 Vendor ID: 4e58 00:14:46.982 Subsystem Vendor ID: 4e58 00:14:46.982 Serial Number: SPDK1 00:14:46.982 Model Number: SPDK bdev Controller 00:14:46.982 Firmware Version: 25.01 00:14:46.982 Recommended Arb Burst: 6 00:14:46.982 IEEE OUI Identifier: 8d 6b 50 00:14:46.982 Multi-path I/O 00:14:46.982 May have multiple subsystem ports: Yes 00:14:46.982 May have multiple controllers: Yes 00:14:46.982 Associated with SR-IOV VF: No 00:14:46.982 Max Data Transfer Size: 131072 00:14:46.982 Max Number of Namespaces: 32 00:14:46.982 Max Number of I/O Queues: 127 00:14:46.982 NVMe Specification Version (VS): 1.3 00:14:46.982 NVMe Specification Version (Identify): 1.3 00:14:46.982 Maximum Queue Entries: 256 00:14:46.982 Contiguous Queues Required: Yes 00:14:46.982 Arbitration Mechanisms Supported 00:14:46.982 Weighted Round Robin: Not Supported 00:14:46.982 Vendor Specific: Not Supported 00:14:46.982 Reset Timeout: 15000 ms 00:14:46.982 Doorbell Stride: 4 bytes 00:14:46.982 NVM Subsystem Reset: Not Supported 00:14:46.982 Command Sets Supported 00:14:46.982 NVM Command Set: Supported 00:14:46.982 Boot Partition: Not Supported 00:14:46.982 Memory Page Size Minimum: 4096 bytes 00:14:46.982 Memory Page Size Maximum: 4096 bytes 00:14:46.982 Persistent Memory Region: Not Supported 00:14:46.982 Optional Asynchronous Events Supported 00:14:46.982 Namespace Attribute Notices: Supported 00:14:46.982 Firmware Activation Notices: Not Supported 00:14:46.982 ANA Change Notices: Not Supported 00:14:46.982 PLE Aggregate Log Change Notices: Not Supported 00:14:46.982 LBA Status Info Alert Notices: Not Supported 00:14:46.982 EGE Aggregate Log Change Notices: Not Supported 00:14:46.982 Normal NVM Subsystem Shutdown event: Not Supported 00:14:46.982 Zone Descriptor Change Notices: Not Supported 00:14:46.982 Discovery Log Change Notices: Not Supported 00:14:46.982 Controller Attributes 00:14:46.982 128-bit Host Identifier: Supported 00:14:46.982 Non-Operational Permissive Mode: Not Supported 00:14:46.982 NVM Sets: Not Supported 00:14:46.982 Read Recovery Levels: Not Supported 00:14:46.982 Endurance Groups: Not Supported 00:14:46.982 Predictable Latency Mode: Not Supported 00:14:46.982 Traffic Based Keep ALive: Not Supported 00:14:46.982 Namespace Granularity: Not Supported 00:14:46.982 SQ Associations: Not Supported 00:14:46.982 UUID List: Not Supported 00:14:46.982 Multi-Domain Subsystem: Not Supported 00:14:46.982 Fixed Capacity Management: Not Supported 00:14:46.982 Variable Capacity Management: Not Supported 00:14:46.982 Delete Endurance Group: Not Supported 00:14:46.982 Delete NVM Set: Not Supported 00:14:46.982 Extended LBA Formats Supported: Not Supported 00:14:46.982 Flexible Data Placement Supported: Not Supported 00:14:46.982 00:14:46.982 Controller Memory Buffer Support 00:14:46.982 ================================ 00:14:46.982 
Supported: No 00:14:46.982 00:14:46.982 Persistent Memory Region Support 00:14:46.982 ================================ 00:14:46.982 Supported: No 00:14:46.982 00:14:46.982 Admin Command Set Attributes 00:14:46.982 ============================ 00:14:46.983 Security Send/Receive: Not Supported 00:14:46.983 Format NVM: Not Supported 00:14:46.983 Firmware Activate/Download: Not Supported 00:14:46.983 Namespace Management: Not Supported 00:14:46.983 Device Self-Test: Not Supported 00:14:46.983 Directives: Not Supported 00:14:46.983 NVMe-MI: Not Supported 00:14:46.983 Virtualization Management: Not Supported 00:14:46.983 Doorbell Buffer Config: Not Supported 00:14:46.983 Get LBA Status Capability: Not Supported 00:14:46.983 Command & Feature Lockdown Capability: Not Supported 00:14:46.983 Abort Command Limit: 4 00:14:46.983 Async Event Request Limit: 4 00:14:46.983 Number of Firmware Slots: N/A 00:14:46.983 Firmware Slot 1 Read-Only: N/A 00:14:46.983 Firmware Activation Without Reset: N/A 00:14:46.983 Multiple Update Detection Support: N/A 00:14:46.983 Firmware Update Granularity: No Information Provided 00:14:46.983 Per-Namespace SMART Log: No 00:14:46.983 Asymmetric Namespace Access Log Page: Not Supported 00:14:46.983 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:46.983 Command Effects Log Page: Supported 00:14:46.983 Get Log Page Extended Data: Supported 00:14:46.983 Telemetry Log Pages: Not Supported 00:14:46.983 Persistent Event Log Pages: Not Supported 00:14:46.983 Supported Log Pages Log Page: May Support 00:14:46.983 Commands Supported & Effects Log Page: Not Supported 00:14:46.983 Feature Identifiers & Effects Log Page:May Support 00:14:46.983 NVMe-MI Commands & Effects Log Page: May Support 00:14:46.983 Data Area 4 for Telemetry Log: Not Supported 00:14:46.983 Error Log Page Entries Supported: 128 00:14:46.983 Keep Alive: Supported 00:14:46.983 Keep Alive Granularity: 10000 ms 00:14:46.983 00:14:46.983 NVM Command Set Attributes 00:14:46.983 ========================== 00:14:46.983 Submission Queue Entry Size 00:14:46.983 Max: 64 00:14:46.983 Min: 64 00:14:46.983 Completion Queue Entry Size 00:14:46.983 Max: 16 00:14:46.983 Min: 16 00:14:46.983 Number of Namespaces: 32 00:14:46.983 Compare Command: Supported 00:14:46.983 Write Uncorrectable Command: Not Supported 00:14:46.983 Dataset Management Command: Supported 00:14:46.983 Write Zeroes Command: Supported 00:14:46.983 Set Features Save Field: Not Supported 00:14:46.983 Reservations: Not Supported 00:14:46.983 Timestamp: Not Supported 00:14:46.983 Copy: Supported 00:14:46.983 Volatile Write Cache: Present 00:14:46.983 Atomic Write Unit (Normal): 1 00:14:46.983 Atomic Write Unit (PFail): 1 00:14:46.983 Atomic Compare & Write Unit: 1 00:14:46.983 Fused Compare & Write: Supported 00:14:46.983 Scatter-Gather List 00:14:46.983 SGL Command Set: Supported (Dword aligned) 00:14:46.983 SGL Keyed: Not Supported 00:14:46.983 SGL Bit Bucket Descriptor: Not Supported 00:14:46.983 SGL Metadata Pointer: Not Supported 00:14:46.983 Oversized SGL: Not Supported 00:14:46.983 SGL Metadata Address: Not Supported 00:14:46.983 SGL Offset: Not Supported 00:14:46.983 Transport SGL Data Block: Not Supported 00:14:46.983 Replay Protected Memory Block: Not Supported 00:14:46.983 00:14:46.983 Firmware Slot Information 00:14:46.983 ========================= 00:14:46.983 Active slot: 1 00:14:46.983 Slot 1 Firmware Revision: 25.01 00:14:46.983 00:14:46.983 00:14:46.983 Commands Supported and Effects 00:14:46.983 ============================== 00:14:46.983 Admin 
Commands 00:14:46.983 -------------- 00:14:46.983 Get Log Page (02h): Supported 00:14:46.983 Identify (06h): Supported 00:14:46.983 Abort (08h): Supported 00:14:46.983 Set Features (09h): Supported 00:14:46.983 Get Features (0Ah): Supported 00:14:46.983 Asynchronous Event Request (0Ch): Supported 00:14:46.983 Keep Alive (18h): Supported 00:14:46.983 I/O Commands 00:14:46.983 ------------ 00:14:46.983 Flush (00h): Supported LBA-Change 00:14:46.983 Write (01h): Supported LBA-Change 00:14:46.983 Read (02h): Supported 00:14:46.983 Compare (05h): Supported 00:14:46.983 Write Zeroes (08h): Supported LBA-Change 00:14:46.983 Dataset Management (09h): Supported LBA-Change 00:14:46.983 Copy (19h): Supported LBA-Change 00:14:46.983 00:14:46.983 Error Log 00:14:46.983 ========= 00:14:46.983 00:14:46.983 Arbitration 00:14:46.983 =========== 00:14:46.983 Arbitration Burst: 1 00:14:46.983 00:14:46.983 Power Management 00:14:46.983 ================ 00:14:46.983 Number of Power States: 1 00:14:46.983 Current Power State: Power State #0 00:14:46.983 Power State #0: 00:14:46.983 Max Power: 0.00 W 00:14:46.983 Non-Operational State: Operational 00:14:46.983 Entry Latency: Not Reported 00:14:46.983 Exit Latency: Not Reported 00:14:46.983 Relative Read Throughput: 0 00:14:46.983 Relative Read Latency: 0 00:14:46.983 Relative Write Throughput: 0 00:14:46.983 Relative Write Latency: 0 00:14:46.983 Idle Power: Not Reported 00:14:46.983 Active Power: Not Reported 00:14:46.983 Non-Operational Permissive Mode: Not Supported 00:14:46.983 00:14:46.983 Health Information 00:14:46.983 ================== 00:14:46.983 Critical Warnings: 00:14:46.983 Available Spare Space: OK 00:14:46.983 Temperature: OK 00:14:46.983 Device Reliability: OK 00:14:46.983 Read Only: No 00:14:46.983 Volatile Memory Backup: OK 00:14:46.983 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:46.983 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:46.983 Available Spare: 0% 00:14:46.983 Available Sp[2024-11-26 19:16:09.851062] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:46.983 [2024-11-26 19:16:09.851072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:46.983 [2024-11-26 19:16:09.851098] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:46.983 [2024-11-26 19:16:09.851107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.983 [2024-11-26 19:16:09.851112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.983 [2024-11-26 19:16:09.851118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.983 [2024-11-26 19:16:09.851123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.983 [2024-11-26 19:16:09.853677] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:46.983 [2024-11-26 19:16:09.853689] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:46.984 [2024-11-26 19:16:09.854256] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:46.984 [2024-11-26 19:16:09.854308] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:46.984 [2024-11-26 19:16:09.854314] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:46.984 [2024-11-26 19:16:09.855259] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:46.984 [2024-11-26 19:16:09.855269] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:46.984 [2024-11-26 19:16:09.855318] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:46.984 [2024-11-26 19:16:09.856281] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:46.984 are Threshold: 0% 00:14:46.984 Life Percentage Used: 0% 00:14:46.984 Data Units Read: 0 00:14:46.984 Data Units Written: 0 00:14:46.984 Host Read Commands: 0 00:14:46.984 Host Write Commands: 0 00:14:46.984 Controller Busy Time: 0 minutes 00:14:46.984 Power Cycles: 0 00:14:46.984 Power On Hours: 0 hours 00:14:46.984 Unsafe Shutdowns: 0 00:14:46.984 Unrecoverable Media Errors: 0 00:14:46.984 Lifetime Error Log Entries: 0 00:14:46.984 Warning Temperature Time: 0 minutes 00:14:46.984 Critical Temperature Time: 0 minutes 00:14:46.984 00:14:46.984 Number of Queues 00:14:46.984 ================ 00:14:46.984 Number of I/O Submission Queues: 127 00:14:46.984 Number of I/O Completion Queues: 127 00:14:46.984 00:14:46.984 Active Namespaces 00:14:46.984 ================= 00:14:46.984 Namespace ID:1 00:14:46.984 Error Recovery Timeout: Unlimited 00:14:46.984 Command Set Identifier: NVM (00h) 00:14:46.984 Deallocate: Supported 00:14:46.984 Deallocated/Unwritten Error: Not Supported 00:14:46.984 Deallocated Read Value: Unknown 00:14:46.984 Deallocate in Write Zeroes: Not Supported 00:14:46.984 Deallocated Guard Field: 0xFFFF 00:14:46.984 Flush: Supported 00:14:46.984 Reservation: Supported 00:14:46.984 Namespace Sharing Capabilities: Multiple Controllers 00:14:46.984 Size (in LBAs): 131072 (0GiB) 00:14:46.984 Capacity (in LBAs): 131072 (0GiB) 00:14:46.984 Utilization (in LBAs): 131072 (0GiB) 00:14:46.984 NGUID: A117B88A09D74ABA8F0511543D42D73F 00:14:46.984 UUID: a117b88a-09d7-4aba-8f05-11543d42d73f 00:14:46.984 Thin Provisioning: Not Supported 00:14:46.984 Per-NS Atomic Units: Yes 00:14:46.984 Atomic Boundary Size (Normal): 0 00:14:46.984 Atomic Boundary Size (PFail): 0 00:14:46.984 Atomic Boundary Offset: 0 00:14:46.984 Maximum Single Source Range Length: 65535 00:14:46.984 Maximum Copy Length: 65535 00:14:46.984 Maximum Source Range Count: 1 00:14:46.984 NGUID/EUI64 Never Reused: No 00:14:46.984 Namespace Write Protected: No 00:14:46.984 Number of LBA Formats: 1 00:14:46.984 Current LBA Format: LBA Format #00 00:14:46.984 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:46.984 00:14:46.984 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
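Every client-side tool in this run (the identify pass above and the perf, reconnect, arbitration, hello_world and overhead passes whose output follows) reaches the controller through the same '-r' transport string. Pulling the invocations out of the surrounding xtrace, with the workspace prefix dropped and the arguments realigned for readability, the sequence is roughly:

  TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  build/bin/spdk_nvme_identify -r "$TR" -g -L nvme -L nvme_vfio -L vfio_pci
  build/bin/spdk_nvme_perf     -r "$TR" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
  build/bin/spdk_nvme_perf     -r "$TR" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
  build/examples/reconnect     -r "$TR" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
  build/examples/arbitration   -r "$TR" -t 3 -d 256 -g
  build/examples/hello_world   -r "$TR" -d 256 -g
  test/nvme/overhead/overhead  -r "$TR" -o 4096 -t 1 -H -g -d 256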
00:14:46.984 [2024-11-26 19:16:10.089588] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:52.254 Initializing NVMe Controllers 00:14:52.254 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:52.254 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:52.254 Initialization complete. Launching workers. 00:14:52.254 ======================================================== 00:14:52.254 Latency(us) 00:14:52.254 Device Information : IOPS MiB/s Average min max 00:14:52.254 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40004.40 156.27 3202.12 941.14 8631.62 00:14:52.254 ======================================================== 00:14:52.254 Total : 40004.40 156.27 3202.12 941.14 8631.62 00:14:52.254 00:14:52.254 [2024-11-26 19:16:15.110822] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:52.254 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:52.254 [2024-11-26 19:16:15.344907] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.523 Initializing NVMe Controllers 00:14:57.523 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:57.523 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:57.523 Initialization complete. Launching workers. 
00:14:57.523 ======================================================== 00:14:57.523 Latency(us) 00:14:57.523 Device Information : IOPS MiB/s Average min max 00:14:57.523 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16044.40 62.67 7987.96 4986.17 15441.32 00:14:57.523 ======================================================== 00:14:57.523 Total : 16044.40 62.67 7987.96 4986.17 15441.32 00:14:57.523 00:14:57.523 [2024-11-26 19:16:20.382400] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.523 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:57.523 [2024-11-26 19:16:20.586363] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:02.790 [2024-11-26 19:16:25.649928] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:02.790 Initializing NVMe Controllers 00:15:02.790 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:02.790 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:02.790 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:02.790 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:02.790 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:02.790 Initialization complete. Launching workers. 00:15:02.790 Starting thread on core 2 00:15:02.790 Starting thread on core 3 00:15:02.790 Starting thread on core 1 00:15:02.790 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:03.046 [2024-11-26 19:16:25.942072] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:06.327 [2024-11-26 19:16:28.995863] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:06.327 Initializing NVMe Controllers 00:15:06.327 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:06.327 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:06.327 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:06.327 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:06.327 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:06.327 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:06.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:06.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:06.327 Initialization complete. Launching workers. 
00:15:06.327 Starting thread on core 1 with urgent priority queue 00:15:06.327 Starting thread on core 2 with urgent priority queue 00:15:06.327 Starting thread on core 3 with urgent priority queue 00:15:06.327 Starting thread on core 0 with urgent priority queue 00:15:06.327 SPDK bdev Controller (SPDK1 ) core 0: 6296.67 IO/s 15.88 secs/100000 ios 00:15:06.327 SPDK bdev Controller (SPDK1 ) core 1: 6285.33 IO/s 15.91 secs/100000 ios 00:15:06.327 SPDK bdev Controller (SPDK1 ) core 2: 7200.67 IO/s 13.89 secs/100000 ios 00:15:06.327 SPDK bdev Controller (SPDK1 ) core 3: 7522.00 IO/s 13.29 secs/100000 ios 00:15:06.327 ======================================================== 00:15:06.327 00:15:06.327 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:06.327 [2024-11-26 19:16:29.290167] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:06.327 Initializing NVMe Controllers 00:15:06.327 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:06.327 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:06.327 Namespace ID: 1 size: 0GB 00:15:06.327 Initialization complete. 00:15:06.327 INFO: using host memory buffer for IO 00:15:06.327 Hello world! 00:15:06.327 [2024-11-26 19:16:29.324390] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:06.327 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:06.585 [2024-11-26 19:16:29.609744] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:07.522 Initializing NVMe Controllers 00:15:07.522 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:07.522 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:07.522 Initialization complete. Launching workers. 
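The overhead run's latency summary and histograms continue below. As a quick cross-check of the throughput tables already printed above (plain arithmetic on the reported figures, not additional data from the run):

  read perf:   40004.40 IOPS x 4096 B = 163,858,022 B/s = 156.27 MiB/s, matching the MiB/s column
  write perf:  16044.40 IOPS x 4096 B =  65,717,862 B/s =  62.67 MiB/s, matching the MiB/s column
  arbitration: 6296.67 IO/s on core 0 gives 100000 / 6296.67 = 15.88 secs/100000 ios, matching that row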
00:15:07.522 submit (in ns) avg, min, max = 6654.8, 3198.1, 4008644.8 00:15:07.522 complete (in ns) avg, min, max = 20643.0, 1717.1, 7987150.5 00:15:07.522 00:15:07.522 Submit histogram 00:15:07.522 ================ 00:15:07.522 Range in us Cumulative Count 00:15:07.522 3.185 - 3.200: 0.0121% ( 2) 00:15:07.522 3.200 - 3.215: 0.2970% ( 47) 00:15:07.522 3.215 - 3.230: 1.7701% ( 243) 00:15:07.522 3.230 - 3.246: 5.5044% ( 616) 00:15:07.522 3.246 - 3.261: 9.4508% ( 651) 00:15:07.522 3.261 - 3.276: 13.7427% ( 708) 00:15:07.522 3.276 - 3.291: 19.3016% ( 917) 00:15:07.523 3.291 - 3.307: 25.1031% ( 957) 00:15:07.523 3.307 - 3.322: 31.0863% ( 987) 00:15:07.523 3.322 - 3.337: 37.4879% ( 1056) 00:15:07.523 3.337 - 3.352: 43.4530% ( 984) 00:15:07.523 3.352 - 3.368: 48.7391% ( 872) 00:15:07.523 3.368 - 3.383: 55.5832% ( 1129) 00:15:07.523 3.383 - 3.398: 62.3788% ( 1121) 00:15:07.523 3.398 - 3.413: 67.3618% ( 822) 00:15:07.523 3.413 - 3.429: 71.9629% ( 759) 00:15:07.523 3.429 - 3.444: 76.4852% ( 746) 00:15:07.523 3.444 - 3.459: 80.0255% ( 584) 00:15:07.523 3.459 - 3.474: 82.4079% ( 393) 00:15:07.523 3.474 - 3.490: 84.1234% ( 283) 00:15:07.523 3.490 - 3.505: 85.3055% ( 195) 00:15:07.523 3.505 - 3.520: 86.2633% ( 158) 00:15:07.523 3.520 - 3.535: 87.0514% ( 130) 00:15:07.523 3.535 - 3.550: 87.7910% ( 122) 00:15:07.523 3.550 - 3.566: 88.7306% ( 155) 00:15:07.523 3.566 - 3.581: 89.5793% ( 140) 00:15:07.523 3.581 - 3.596: 90.4219% ( 139) 00:15:07.523 3.596 - 3.611: 91.3858% ( 159) 00:15:07.523 3.611 - 3.627: 92.4467% ( 175) 00:15:07.523 3.627 - 3.642: 93.4772% ( 170) 00:15:07.523 3.642 - 3.657: 94.4714% ( 164) 00:15:07.523 3.657 - 3.672: 95.2716% ( 132) 00:15:07.523 3.672 - 3.688: 96.0597% ( 130) 00:15:07.523 3.688 - 3.703: 96.7507% ( 114) 00:15:07.523 3.703 - 3.718: 97.2236% ( 78) 00:15:07.523 3.718 - 3.733: 97.6176% ( 65) 00:15:07.523 3.733 - 3.749: 97.9571% ( 56) 00:15:07.523 3.749 - 3.764: 98.1996% ( 40) 00:15:07.523 3.764 - 3.779: 98.3996% ( 33) 00:15:07.523 3.779 - 3.794: 98.5512% ( 25) 00:15:07.523 3.794 - 3.810: 98.7209% ( 28) 00:15:07.523 3.810 - 3.825: 98.8058% ( 14) 00:15:07.523 3.825 - 3.840: 98.8543% ( 8) 00:15:07.523 3.840 - 3.855: 98.9028% ( 8) 00:15:07.523 3.855 - 3.870: 98.9391% ( 6) 00:15:07.523 3.870 - 3.886: 98.9634% ( 4) 00:15:07.523 3.886 - 3.901: 99.0179% ( 9) 00:15:07.523 3.901 - 3.931: 99.0664% ( 8) 00:15:07.523 3.931 - 3.962: 99.1149% ( 8) 00:15:07.523 3.962 - 3.992: 99.1877% ( 12) 00:15:07.523 3.992 - 4.023: 99.2180% ( 5) 00:15:07.523 4.023 - 4.053: 99.2362% ( 3) 00:15:07.523 4.053 - 4.084: 99.2665% ( 5) 00:15:07.523 4.084 - 4.114: 99.2726% ( 1) 00:15:07.523 4.114 - 4.145: 99.3210% ( 8) 00:15:07.523 4.145 - 4.175: 99.3392% ( 3) 00:15:07.523 4.175 - 4.206: 99.3817% ( 7) 00:15:07.523 4.206 - 4.236: 99.3938% ( 2) 00:15:07.523 4.267 - 4.297: 99.4120% ( 3) 00:15:07.523 4.297 - 4.328: 99.4180% ( 1) 00:15:07.523 4.328 - 4.358: 99.4241% ( 1) 00:15:07.523 4.358 - 4.389: 99.4302% ( 1) 00:15:07.523 4.419 - 4.450: 99.4362% ( 1) 00:15:07.523 4.450 - 4.480: 99.4484% ( 2) 00:15:07.523 4.480 - 4.510: 99.4544% ( 1) 00:15:07.523 4.510 - 4.541: 99.4605% ( 1) 00:15:07.523 4.541 - 4.571: 99.4665% ( 1) 00:15:07.523 4.663 - 4.693: 99.4787% ( 2) 00:15:07.523 4.724 - 4.754: 99.4847% ( 1) 00:15:07.523 4.876 - 4.907: 99.4908% ( 1) 00:15:07.523 4.907 - 4.937: 99.4968% ( 1) 00:15:07.523 4.937 - 4.968: 99.5029% ( 1) 00:15:07.523 4.968 - 4.998: 99.5090% ( 1) 00:15:07.523 4.998 - 5.029: 99.5211% ( 2) 00:15:07.523 5.303 - 5.333: 99.5272% ( 1) 00:15:07.523 5.364 - 5.394: 99.5393% ( 2) 00:15:07.523 5.394 - 5.425: 
99.5575% ( 3) 00:15:07.523 5.455 - 5.486: 99.5635% ( 1) 00:15:07.523 5.516 - 5.547: 99.5757% ( 2) 00:15:07.523 5.577 - 5.608: 99.5817% ( 1) 00:15:07.523 5.638 - 5.669: 99.6060% ( 4) 00:15:07.523 5.669 - 5.699: 99.6120% ( 1) 00:15:07.523 5.730 - 5.760: 99.6242% ( 2) 00:15:07.523 5.760 - 5.790: 99.6302% ( 1) 00:15:07.523 5.790 - 5.821: 99.6484% ( 3) 00:15:07.523 5.851 - 5.882: 99.6545% ( 1) 00:15:07.523 5.882 - 5.912: 99.6605% ( 1) 00:15:07.523 5.943 - 5.973: 99.6726% ( 2) 00:15:07.523 6.095 - 6.126: 99.6787% ( 1) 00:15:07.523 6.126 - 6.156: 99.6848% ( 1) 00:15:07.523 6.217 - 6.248: 99.6908% ( 1) 00:15:07.523 6.248 - 6.278: 99.6969% ( 1) 00:15:07.523 6.278 - 6.309: 99.7030% ( 1) 00:15:07.523 6.309 - 6.339: 99.7090% ( 1) 00:15:07.523 6.370 - 6.400: 99.7211% ( 2) 00:15:07.523 6.400 - 6.430: 99.7272% ( 1) 00:15:07.523 6.522 - 6.552: 99.7333% ( 1) 00:15:07.523 6.583 - 6.613: 99.7393% ( 1) 00:15:07.523 6.674 - 6.705: 99.7454% ( 1) 00:15:07.523 6.705 - 6.735: 99.7515% ( 1) 00:15:07.523 6.766 - 6.796: 99.7575% ( 1) 00:15:07.523 7.040 - 7.070: 99.7636% ( 1) 00:15:07.523 7.131 - 7.162: 99.7696% ( 1) 00:15:07.523 7.223 - 7.253: 99.7757% ( 1) 00:15:07.523 7.314 - 7.345: 99.7818% ( 1) 00:15:07.523 7.345 - 7.375: 99.7878% ( 1) 00:15:07.523 7.375 - 7.406: 99.7939% ( 1) 00:15:07.523 7.406 - 7.436: 99.8000% ( 1) 00:15:07.523 7.436 - 7.467: 99.8060% ( 1) 00:15:07.523 7.558 - 7.589: 99.8121% ( 1) 00:15:07.523 7.589 - 7.619: 99.8181% ( 1) 00:15:07.523 7.802 - 7.863: 99.8303% ( 2) 00:15:07.523 7.863 - 7.924: 99.8363% ( 1) 00:15:07.523 8.046 - 8.107: 99.8424% ( 1) 00:15:07.523 8.533 - 8.594: 99.8484% ( 1) 00:15:07.523 8.716 - 8.777: 99.8545% ( 1) 00:15:07.523 9.082 - 9.143: 99.8606% ( 1) 00:15:07.523 9.630 - 9.691: 99.8666% ( 1) 00:15:07.523 9.935 - 9.996: 99.8727% ( 1) 00:15:07.523 15.177 - 15.238: 99.8788% ( 1) 00:15:07.523 15.482 - 15.543: 99.8848% ( 1) 00:15:07.523 15.726 - 15.848: 99.8909% ( 1) 00:15:07.523 18.164 - 18.286: 99.8969% ( 1) 00:15:07.523 18.773 - 18.895: 99.9030% ( 1) 00:15:07.523 19.261 - 19.383: 99.9091% ( 1) 00:15:07.523 19.383 - 19.505: 99.9151% ( 1) 00:15:07.523 1583.787 - 1591.589: 99.9212% ( 1) 00:15:07.523 3994.575 - 4025.783: 100.0000% ( 13) 00:15:07.523 00:15:07.523 Complete histogram 00:15:07.523 ================== 00:15:07.523 Range in us Cumulative Count 00:15:07.523 1.714 - 1.722: 0.0061% ( 1) 00:15:07.523 1.722 - 1.730: 0.0485% ( 7) 00:15:07.523 1.730 - 1.737: 0.0970% ( 8) 00:15:07.523 1.737 - 1.745: 0.1212% ( 4) 00:15:07.523 1.752 - 1.760: 0.1697% ( 8) 00:15:07.523 1.760 - 1.768: 1.6974% ( 252) 00:15:07.523 1.768 - 1.775: 15.1734% ( 2223) 00:15:07.523 1.775 - 1.783: 43.1256% ( 4611) 00:15:07.523 1.783 - 1.790: 62.0757% ( 3126) 00:15:07.523 1.790 - 1.798: 67.5436% ( 902) 00:15:07.523 1.798 - 1.806: 71.2355% ( 609) 00:15:07.523 1.806 - 1.813: 73.7512% ( 415) 00:15:07.523 1.813 - 1.821: 74.7878% ( 171) 00:15:07.523 1.821 - 1.829: 76.3822% ( 263) 00:15:07.523 1.829 - 1.836: 81.3167% ( 814) 00:15:07.523 1.836 - 1.844: 87.3424% ( 994) 00:15:07.523 1.844 - 1.851: 91.0039% ( 604) 00:15:07.523 1.851 - 1.859: 93.2105% ( 364) 00:15:07.523 1.859 - 1.867: 94.9079% ( 280) 00:15:07.523 1.867 - 1.874: 95.7020% ( 131) 00:15:07.523 1.874 - 1.882: 96.0293% ( 54) 00:15:07.523 1.882 - 1.890: 96.4961% ( 77) 00:15:07.523 1.890 - 1.897: 97.0781% ( 96) 00:15:07.523 1.897 - 1.905: 97.3994% ( 53) 00:15:07.523 1.905 - 1.912: 97.6479% ( 41) 00:15:07.523 1.912 - 1.920: 97.7510% ( 17) 00:15:07.523 1.920 - 1.928: 97.8419% ( 15) 00:15:07.523 1.928 - 1.935: 97.8722% ( 5) 00:15:07.523 1.935 - 1.943: 97.9328% ( 
10) 00:15:07.523 1.943 - 1.950: 98.0177% ( 14) 00:15:07.523 1.950 - 1.966: 98.1874% ( 28) 00:15:07.524 1.966 - 1.981: 98.3026% ( 19) 00:15:07.524 1.981 - 1.996: 98.3269% ( 4) 00:15:07.524 1.996 - 2.011: 98.3451% ( 3) 00:15:07.524 2.011 - 2.027: 98.5027% ( 26) 00:15:07.524 2.027 - 2.042: 98.5754% ( 12) 00:15:07.524 2.042 - 2.0[2024-11-26 19:16:30.626715] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:07.783 57: 98.5875% ( 2) 00:15:07.783 2.057 - 2.072: 98.6178% ( 5) 00:15:07.783 2.072 - 2.088: 98.6906% ( 12) 00:15:07.783 2.088 - 2.103: 98.7391% ( 8) 00:15:07.783 2.103 - 2.118: 98.7452% ( 1) 00:15:07.783 2.133 - 2.149: 98.7573% ( 2) 00:15:07.783 2.149 - 2.164: 98.8664% ( 18) 00:15:07.783 2.164 - 2.179: 98.9573% ( 15) 00:15:07.783 2.179 - 2.194: 98.9998% ( 7) 00:15:07.783 2.194 - 2.210: 99.0240% ( 4) 00:15:07.783 2.210 - 2.225: 99.0422% ( 3) 00:15:07.783 2.225 - 2.240: 99.0664% ( 4) 00:15:07.783 2.240 - 2.255: 99.0968% ( 5) 00:15:07.783 2.255 - 2.270: 99.1089% ( 2) 00:15:07.783 2.270 - 2.286: 99.1271% ( 3) 00:15:07.783 2.286 - 2.301: 99.1392% ( 2) 00:15:07.783 2.301 - 2.316: 99.1452% ( 1) 00:15:07.783 2.331 - 2.347: 99.1574% ( 2) 00:15:07.783 2.347 - 2.362: 99.1816% ( 4) 00:15:07.783 2.362 - 2.377: 99.2059% ( 4) 00:15:07.783 2.392 - 2.408: 99.2180% ( 2) 00:15:07.783 2.408 - 2.423: 99.2362% ( 3) 00:15:07.783 2.423 - 2.438: 99.2422% ( 1) 00:15:07.783 2.438 - 2.453: 99.2483% ( 1) 00:15:07.783 2.590 - 2.606: 99.2544% ( 1) 00:15:07.783 2.636 - 2.651: 99.2604% ( 1) 00:15:07.783 2.682 - 2.697: 99.2665% ( 1) 00:15:07.783 2.728 - 2.743: 99.2726% ( 1) 00:15:07.783 2.773 - 2.789: 99.2786% ( 1) 00:15:07.783 2.880 - 2.895: 99.2907% ( 2) 00:15:07.783 2.926 - 2.941: 99.2968% ( 1) 00:15:07.783 3.002 - 3.017: 99.3029% ( 1) 00:15:07.783 3.017 - 3.032: 99.3089% ( 1) 00:15:07.783 3.490 - 3.505: 99.3150% ( 1) 00:15:07.783 3.611 - 3.627: 99.3210% ( 1) 00:15:07.783 3.733 - 3.749: 99.3271% ( 1) 00:15:07.783 3.901 - 3.931: 99.3332% ( 1) 00:15:07.783 3.992 - 4.023: 99.3392% ( 1) 00:15:07.783 4.114 - 4.145: 99.3453% ( 1) 00:15:07.783 4.145 - 4.175: 99.3635% ( 3) 00:15:07.783 4.267 - 4.297: 99.3756% ( 2) 00:15:07.783 4.297 - 4.328: 99.3817% ( 1) 00:15:07.783 4.389 - 4.419: 99.3877% ( 1) 00:15:07.783 4.815 - 4.846: 99.3938% ( 1) 00:15:07.783 4.876 - 4.907: 99.3999% ( 1) 00:15:07.783 5.120 - 5.150: 99.4120% ( 2) 00:15:07.783 5.211 - 5.242: 99.4180% ( 1) 00:15:07.783 5.242 - 5.272: 99.4241% ( 1) 00:15:07.783 5.303 - 5.333: 99.4302% ( 1) 00:15:07.783 5.364 - 5.394: 99.4362% ( 1) 00:15:07.783 5.547 - 5.577: 99.4423% ( 1) 00:15:07.783 5.608 - 5.638: 99.4484% ( 1) 00:15:07.783 5.669 - 5.699: 99.4544% ( 1) 00:15:07.783 5.730 - 5.760: 99.4605% ( 1) 00:15:07.783 5.912 - 5.943: 99.4665% ( 1) 00:15:07.783 6.339 - 6.370: 99.4726% ( 1) 00:15:07.783 6.430 - 6.461: 99.4787% ( 1) 00:15:07.783 7.924 - 7.985: 99.4847% ( 1) 00:15:07.783 8.290 - 8.350: 99.4908% ( 1) 00:15:07.783 8.716 - 8.777: 99.4968% ( 1) 00:15:07.783 9.448 - 9.509: 99.5029% ( 1) 00:15:07.783 10.301 - 10.362: 99.5090% ( 1) 00:15:07.783 11.764 - 11.825: 99.5150% ( 1) 00:15:07.783 24.869 - 24.990: 99.5211% ( 1) 00:15:07.783 28.526 - 28.648: 99.5272% ( 1) 00:15:07.783 29.013 - 29.135: 99.5332% ( 1) 00:15:07.783 39.253 - 39.497: 99.5393% ( 1) 00:15:07.783 3978.971 - 3994.575: 99.5575% ( 3) 00:15:07.783 3994.575 - 4025.783: 99.9879% ( 71) 00:15:07.783 6959.299 - 6990.507: 99.9939% ( 1) 00:15:07.783 7957.943 - 7989.150: 100.0000% ( 1) 00:15:07.783 00:15:07.783 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:07.783 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:07.783 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:07.783 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:07.783 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:07.783 [ 00:15:07.783 { 00:15:07.783 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:07.783 "subtype": "Discovery", 00:15:07.783 "listen_addresses": [], 00:15:07.783 "allow_any_host": true, 00:15:07.783 "hosts": [] 00:15:07.783 }, 00:15:07.783 { 00:15:07.783 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:07.783 "subtype": "NVMe", 00:15:07.783 "listen_addresses": [ 00:15:07.783 { 00:15:07.783 "trtype": "VFIOUSER", 00:15:07.783 "adrfam": "IPv4", 00:15:07.783 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:07.783 "trsvcid": "0" 00:15:07.783 } 00:15:07.783 ], 00:15:07.783 "allow_any_host": true, 00:15:07.783 "hosts": [], 00:15:07.783 "serial_number": "SPDK1", 00:15:07.783 "model_number": "SPDK bdev Controller", 00:15:07.783 "max_namespaces": 32, 00:15:07.783 "min_cntlid": 1, 00:15:07.783 "max_cntlid": 65519, 00:15:07.783 "namespaces": [ 00:15:07.783 { 00:15:07.783 "nsid": 1, 00:15:07.783 "bdev_name": "Malloc1", 00:15:07.783 "name": "Malloc1", 00:15:07.783 "nguid": "A117B88A09D74ABA8F0511543D42D73F", 00:15:07.783 "uuid": "a117b88a-09d7-4aba-8f05-11543d42d73f" 00:15:07.783 } 00:15:07.783 ] 00:15:07.783 }, 00:15:07.783 { 00:15:07.783 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:07.783 "subtype": "NVMe", 00:15:07.783 "listen_addresses": [ 00:15:07.783 { 00:15:07.783 "trtype": "VFIOUSER", 00:15:07.783 "adrfam": "IPv4", 00:15:07.783 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:07.783 "trsvcid": "0" 00:15:07.783 } 00:15:07.784 ], 00:15:07.784 "allow_any_host": true, 00:15:07.784 "hosts": [], 00:15:07.784 "serial_number": "SPDK2", 00:15:07.784 "model_number": "SPDK bdev Controller", 00:15:07.784 "max_namespaces": 32, 00:15:07.784 "min_cntlid": 1, 00:15:07.784 "max_cntlid": 65519, 00:15:07.784 "namespaces": [ 00:15:07.784 { 00:15:07.784 "nsid": 1, 00:15:07.784 "bdev_name": "Malloc2", 00:15:07.784 "name": "Malloc2", 00:15:07.784 "nguid": "971D8E3E8FE643AEA00D18B47793871A", 00:15:07.784 "uuid": "971d8e3e-8fe6-43ae-a00d-18b47793871a" 00:15:07.784 } 00:15:07.784 ] 00:15:07.784 } 00:15:07.784 ] 00:15:07.784 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:07.784 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:07.784 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3709553 00:15:07.784 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:07.784 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 
00:15:07.784 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:07.784 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:07.784 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:07.784 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:07.784 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:08.042 [2024-11-26 19:16:31.014102] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:08.042 Malloc3 00:15:08.042 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:08.301 [2024-11-26 19:16:31.258864] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:08.301 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:08.301 Asynchronous Event Request test 00:15:08.301 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:08.301 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:08.301 Registering asynchronous event callbacks... 00:15:08.301 Starting namespace attribute notice tests for all controllers... 00:15:08.301 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:08.301 aer_cb - Changed Namespace 00:15:08.301 Cleaning up... 
00:15:08.560 [ 00:15:08.560 { 00:15:08.560 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:08.560 "subtype": "Discovery", 00:15:08.560 "listen_addresses": [], 00:15:08.560 "allow_any_host": true, 00:15:08.560 "hosts": [] 00:15:08.560 }, 00:15:08.560 { 00:15:08.560 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:08.560 "subtype": "NVMe", 00:15:08.560 "listen_addresses": [ 00:15:08.560 { 00:15:08.560 "trtype": "VFIOUSER", 00:15:08.560 "adrfam": "IPv4", 00:15:08.560 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:08.560 "trsvcid": "0" 00:15:08.560 } 00:15:08.560 ], 00:15:08.560 "allow_any_host": true, 00:15:08.560 "hosts": [], 00:15:08.560 "serial_number": "SPDK1", 00:15:08.560 "model_number": "SPDK bdev Controller", 00:15:08.560 "max_namespaces": 32, 00:15:08.560 "min_cntlid": 1, 00:15:08.560 "max_cntlid": 65519, 00:15:08.560 "namespaces": [ 00:15:08.560 { 00:15:08.560 "nsid": 1, 00:15:08.560 "bdev_name": "Malloc1", 00:15:08.560 "name": "Malloc1", 00:15:08.560 "nguid": "A117B88A09D74ABA8F0511543D42D73F", 00:15:08.560 "uuid": "a117b88a-09d7-4aba-8f05-11543d42d73f" 00:15:08.560 }, 00:15:08.560 { 00:15:08.560 "nsid": 2, 00:15:08.560 "bdev_name": "Malloc3", 00:15:08.560 "name": "Malloc3", 00:15:08.560 "nguid": "7688712EF268451097299DF31BC248EA", 00:15:08.560 "uuid": "7688712e-f268-4510-9729-9df31bc248ea" 00:15:08.560 } 00:15:08.560 ] 00:15:08.560 }, 00:15:08.560 { 00:15:08.560 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:08.560 "subtype": "NVMe", 00:15:08.560 "listen_addresses": [ 00:15:08.560 { 00:15:08.560 "trtype": "VFIOUSER", 00:15:08.560 "adrfam": "IPv4", 00:15:08.560 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:08.560 "trsvcid": "0" 00:15:08.560 } 00:15:08.560 ], 00:15:08.560 "allow_any_host": true, 00:15:08.560 "hosts": [], 00:15:08.560 "serial_number": "SPDK2", 00:15:08.560 "model_number": "SPDK bdev Controller", 00:15:08.560 "max_namespaces": 32, 00:15:08.560 "min_cntlid": 1, 00:15:08.560 "max_cntlid": 65519, 00:15:08.560 "namespaces": [ 00:15:08.560 { 00:15:08.560 "nsid": 1, 00:15:08.560 "bdev_name": "Malloc2", 00:15:08.560 "name": "Malloc2", 00:15:08.560 "nguid": "971D8E3E8FE643AEA00D18B47793871A", 00:15:08.560 "uuid": "971d8e3e-8fe6-43ae-a00d-18b47793871a" 00:15:08.560 } 00:15:08.560 ] 00:15:08.560 } 00:15:08.560 ] 00:15:08.560 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3709553 00:15:08.560 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:08.560 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:08.560 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:08.560 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:08.560 [2024-11-26 19:16:31.507190] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:15:08.560 [2024-11-26 19:16:31.507235] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3709566 ] 00:15:08.560 [2024-11-26 19:16:31.545155] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:08.560 [2024-11-26 19:16:31.557977] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:08.560 [2024-11-26 19:16:31.558002] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f64b4bdf000 00:15:08.560 [2024-11-26 19:16:31.558977] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.560 [2024-11-26 19:16:31.559991] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.560 [2024-11-26 19:16:31.560998] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.560 [2024-11-26 19:16:31.562003] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:08.560 [2024-11-26 19:16:31.563005] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:08.560 [2024-11-26 19:16:31.564011] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.560 [2024-11-26 19:16:31.565025] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:08.560 [2024-11-26 19:16:31.566030] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.560 [2024-11-26 19:16:31.567042] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:08.560 [2024-11-26 19:16:31.567052] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f64b4bd4000 00:15:08.560 [2024-11-26 19:16:31.567965] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:08.560 [2024-11-26 19:16:31.577316] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:08.560 [2024-11-26 19:16:31.577345] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:08.560 [2024-11-26 19:16:31.582421] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:08.560 [2024-11-26 19:16:31.582458] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:08.560 [2024-11-26 19:16:31.582526] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:08.560 
[2024-11-26 19:16:31.582539] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:08.560 [2024-11-26 19:16:31.582543] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:08.560 [2024-11-26 19:16:31.583421] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:08.560 [2024-11-26 19:16:31.583431] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:08.560 [2024-11-26 19:16:31.583438] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:08.560 [2024-11-26 19:16:31.584428] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:08.560 [2024-11-26 19:16:31.584436] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:08.561 [2024-11-26 19:16:31.584442] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:08.561 [2024-11-26 19:16:31.585436] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:08.561 [2024-11-26 19:16:31.585445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:08.561 [2024-11-26 19:16:31.586439] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:08.561 [2024-11-26 19:16:31.586448] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:08.561 [2024-11-26 19:16:31.586452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:08.561 [2024-11-26 19:16:31.586458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:08.561 [2024-11-26 19:16:31.586568] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:08.561 [2024-11-26 19:16:31.586573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:08.561 [2024-11-26 19:16:31.586577] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:08.561 [2024-11-26 19:16:31.587451] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:08.561 [2024-11-26 19:16:31.588454] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:08.561 [2024-11-26 19:16:31.589459] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:08.561 [2024-11-26 19:16:31.590460] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:08.561 [2024-11-26 19:16:31.590501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:08.561 [2024-11-26 19:16:31.591478] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:08.561 [2024-11-26 19:16:31.591488] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:08.561 [2024-11-26 19:16:31.591493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:08.561 [2024-11-26 19:16:31.591511] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:08.561 [2024-11-26 19:16:31.591519] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:08.561 [2024-11-26 19:16:31.591533] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:08.561 [2024-11-26 19:16:31.591539] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:08.561 [2024-11-26 19:16:31.591543] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:08.561 [2024-11-26 19:16:31.591553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:08.561 [2024-11-26 19:16:31.598677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:08.561 [2024-11-26 19:16:31.598689] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:08.561 [2024-11-26 19:16:31.598694] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:08.561 [2024-11-26 19:16:31.598698] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:08.561 [2024-11-26 19:16:31.598702] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:08.561 [2024-11-26 19:16:31.598706] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:08.561 [2024-11-26 19:16:31.598710] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:08.561 [2024-11-26 19:16:31.598715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:08.561 [2024-11-26 19:16:31.598721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:08.561 [2024-11-26 
19:16:31.598732] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:08.561 [2024-11-26 19:16:31.606675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:08.561 [2024-11-26 19:16:31.606687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.561 [2024-11-26 19:16:31.606695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.561 [2024-11-26 19:16:31.606702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.561 [2024-11-26 19:16:31.606710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.561 [2024-11-26 19:16:31.606714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:08.561 [2024-11-26 19:16:31.606723] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:08.561 [2024-11-26 19:16:31.606732] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:08.561 [2024-11-26 19:16:31.614676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:08.561 [2024-11-26 19:16:31.614685] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:08.561 [2024-11-26 19:16:31.614690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:08.561 [2024-11-26 19:16:31.614700] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:08.561 [2024-11-26 19:16:31.614706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:08.561 [2024-11-26 19:16:31.614714] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:08.561 [2024-11-26 19:16:31.622675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:08.561 [2024-11-26 19:16:31.622729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:08.561 [2024-11-26 19:16:31.622739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:08.561 [2024-11-26 19:16:31.622747] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:08.561 [2024-11-26 19:16:31.622752] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:15:08.561 [2024-11-26 19:16:31.622757] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:08.561 [2024-11-26 19:16:31.622762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:08.562 [2024-11-26 19:16:31.630677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:08.562 [2024-11-26 19:16:31.630691] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:08.562 [2024-11-26 19:16:31.630698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:08.562 [2024-11-26 19:16:31.630706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:08.562 [2024-11-26 19:16:31.630713] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:08.562 [2024-11-26 19:16:31.630717] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:08.562 [2024-11-26 19:16:31.630720] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:08.562 [2024-11-26 19:16:31.630725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:08.562 [2024-11-26 19:16:31.638677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:08.562 [2024-11-26 19:16:31.638692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:08.562 [2024-11-26 19:16:31.638700] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:08.562 [2024-11-26 19:16:31.638706] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:08.562 [2024-11-26 19:16:31.638710] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:08.562 [2024-11-26 19:16:31.638713] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:08.562 [2024-11-26 19:16:31.638719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:08.562 [2024-11-26 19:16:31.646675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:08.562 [2024-11-26 19:16:31.646688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:08.562 [2024-11-26 19:16:31.646694] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:08.562 [2024-11-26 19:16:31.646701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:15:08.562 [2024-11-26 19:16:31.646707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:08.562 [2024-11-26 19:16:31.646712] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:08.562 [2024-11-26 19:16:31.646717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:08.562 [2024-11-26 19:16:31.646721] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:08.562 [2024-11-26 19:16:31.646725] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:08.562 [2024-11-26 19:16:31.646730] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:08.562 [2024-11-26 19:16:31.646745] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:08.562 [2024-11-26 19:16:31.654677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:08.562 [2024-11-26 19:16:31.654689] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:08.562 [2024-11-26 19:16:31.662675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:08.562 [2024-11-26 19:16:31.662687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:08.562 [2024-11-26 19:16:31.670676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:08.562 [2024-11-26 19:16:31.670688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:08.822 [2024-11-26 19:16:31.678677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:08.822 [2024-11-26 19:16:31.678693] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:08.822 [2024-11-26 19:16:31.678698] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:08.822 [2024-11-26 19:16:31.678701] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:08.822 [2024-11-26 19:16:31.678704] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:08.822 [2024-11-26 19:16:31.678707] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:08.822 [2024-11-26 19:16:31.678713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:08.822 [2024-11-26 19:16:31.678719] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:08.822 
[2024-11-26 19:16:31.678723] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:08.822 [2024-11-26 19:16:31.678727] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:08.822 [2024-11-26 19:16:31.678732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:08.822 [2024-11-26 19:16:31.678738] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:08.822 [2024-11-26 19:16:31.678742] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:08.822 [2024-11-26 19:16:31.678745] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:08.822 [2024-11-26 19:16:31.678750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:08.822 [2024-11-26 19:16:31.678757] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:08.822 [2024-11-26 19:16:31.678761] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:08.822 [2024-11-26 19:16:31.678764] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:08.822 [2024-11-26 19:16:31.678769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:08.822 [2024-11-26 19:16:31.686677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:08.822 [2024-11-26 19:16:31.686691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:08.822 [2024-11-26 19:16:31.686701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:08.822 [2024-11-26 19:16:31.686707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:08.822 ===================================================== 00:15:08.822 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:08.822 ===================================================== 00:15:08.822 Controller Capabilities/Features 00:15:08.823 ================================ 00:15:08.823 Vendor ID: 4e58 00:15:08.823 Subsystem Vendor ID: 4e58 00:15:08.823 Serial Number: SPDK2 00:15:08.823 Model Number: SPDK bdev Controller 00:15:08.823 Firmware Version: 25.01 00:15:08.823 Recommended Arb Burst: 6 00:15:08.823 IEEE OUI Identifier: 8d 6b 50 00:15:08.823 Multi-path I/O 00:15:08.823 May have multiple subsystem ports: Yes 00:15:08.823 May have multiple controllers: Yes 00:15:08.823 Associated with SR-IOV VF: No 00:15:08.823 Max Data Transfer Size: 131072 00:15:08.823 Max Number of Namespaces: 32 00:15:08.823 Max Number of I/O Queues: 127 00:15:08.823 NVMe Specification Version (VS): 1.3 00:15:08.823 NVMe Specification Version (Identify): 1.3 00:15:08.823 Maximum Queue Entries: 256 00:15:08.823 Contiguous Queues Required: Yes 00:15:08.823 Arbitration Mechanisms Supported 00:15:08.823 Weighted Round Robin: Not Supported 00:15:08.823 Vendor Specific: Not 
Supported 00:15:08.823 Reset Timeout: 15000 ms 00:15:08.823 Doorbell Stride: 4 bytes 00:15:08.823 NVM Subsystem Reset: Not Supported 00:15:08.823 Command Sets Supported 00:15:08.823 NVM Command Set: Supported 00:15:08.823 Boot Partition: Not Supported 00:15:08.823 Memory Page Size Minimum: 4096 bytes 00:15:08.823 Memory Page Size Maximum: 4096 bytes 00:15:08.823 Persistent Memory Region: Not Supported 00:15:08.823 Optional Asynchronous Events Supported 00:15:08.823 Namespace Attribute Notices: Supported 00:15:08.823 Firmware Activation Notices: Not Supported 00:15:08.823 ANA Change Notices: Not Supported 00:15:08.823 PLE Aggregate Log Change Notices: Not Supported 00:15:08.823 LBA Status Info Alert Notices: Not Supported 00:15:08.823 EGE Aggregate Log Change Notices: Not Supported 00:15:08.823 Normal NVM Subsystem Shutdown event: Not Supported 00:15:08.823 Zone Descriptor Change Notices: Not Supported 00:15:08.823 Discovery Log Change Notices: Not Supported 00:15:08.823 Controller Attributes 00:15:08.823 128-bit Host Identifier: Supported 00:15:08.823 Non-Operational Permissive Mode: Not Supported 00:15:08.823 NVM Sets: Not Supported 00:15:08.823 Read Recovery Levels: Not Supported 00:15:08.823 Endurance Groups: Not Supported 00:15:08.823 Predictable Latency Mode: Not Supported 00:15:08.823 Traffic Based Keep ALive: Not Supported 00:15:08.823 Namespace Granularity: Not Supported 00:15:08.823 SQ Associations: Not Supported 00:15:08.823 UUID List: Not Supported 00:15:08.823 Multi-Domain Subsystem: Not Supported 00:15:08.823 Fixed Capacity Management: Not Supported 00:15:08.823 Variable Capacity Management: Not Supported 00:15:08.823 Delete Endurance Group: Not Supported 00:15:08.823 Delete NVM Set: Not Supported 00:15:08.823 Extended LBA Formats Supported: Not Supported 00:15:08.823 Flexible Data Placement Supported: Not Supported 00:15:08.823 00:15:08.823 Controller Memory Buffer Support 00:15:08.823 ================================ 00:15:08.823 Supported: No 00:15:08.823 00:15:08.823 Persistent Memory Region Support 00:15:08.823 ================================ 00:15:08.823 Supported: No 00:15:08.823 00:15:08.823 Admin Command Set Attributes 00:15:08.823 ============================ 00:15:08.823 Security Send/Receive: Not Supported 00:15:08.823 Format NVM: Not Supported 00:15:08.823 Firmware Activate/Download: Not Supported 00:15:08.823 Namespace Management: Not Supported 00:15:08.823 Device Self-Test: Not Supported 00:15:08.823 Directives: Not Supported 00:15:08.823 NVMe-MI: Not Supported 00:15:08.823 Virtualization Management: Not Supported 00:15:08.823 Doorbell Buffer Config: Not Supported 00:15:08.823 Get LBA Status Capability: Not Supported 00:15:08.823 Command & Feature Lockdown Capability: Not Supported 00:15:08.823 Abort Command Limit: 4 00:15:08.823 Async Event Request Limit: 4 00:15:08.823 Number of Firmware Slots: N/A 00:15:08.823 Firmware Slot 1 Read-Only: N/A 00:15:08.823 Firmware Activation Without Reset: N/A 00:15:08.823 Multiple Update Detection Support: N/A 00:15:08.823 Firmware Update Granularity: No Information Provided 00:15:08.823 Per-Namespace SMART Log: No 00:15:08.823 Asymmetric Namespace Access Log Page: Not Supported 00:15:08.823 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:08.823 Command Effects Log Page: Supported 00:15:08.823 Get Log Page Extended Data: Supported 00:15:08.823 Telemetry Log Pages: Not Supported 00:15:08.823 Persistent Event Log Pages: Not Supported 00:15:08.823 Supported Log Pages Log Page: May Support 00:15:08.823 Commands Supported & 
Effects Log Page: Not Supported 00:15:08.823 Feature Identifiers & Effects Log Page:May Support 00:15:08.823 NVMe-MI Commands & Effects Log Page: May Support 00:15:08.823 Data Area 4 for Telemetry Log: Not Supported 00:15:08.823 Error Log Page Entries Supported: 128 00:15:08.823 Keep Alive: Supported 00:15:08.823 Keep Alive Granularity: 10000 ms 00:15:08.823 00:15:08.823 NVM Command Set Attributes 00:15:08.823 ========================== 00:15:08.823 Submission Queue Entry Size 00:15:08.823 Max: 64 00:15:08.823 Min: 64 00:15:08.823 Completion Queue Entry Size 00:15:08.823 Max: 16 00:15:08.823 Min: 16 00:15:08.823 Number of Namespaces: 32 00:15:08.823 Compare Command: Supported 00:15:08.823 Write Uncorrectable Command: Not Supported 00:15:08.823 Dataset Management Command: Supported 00:15:08.823 Write Zeroes Command: Supported 00:15:08.823 Set Features Save Field: Not Supported 00:15:08.823 Reservations: Not Supported 00:15:08.823 Timestamp: Not Supported 00:15:08.823 Copy: Supported 00:15:08.823 Volatile Write Cache: Present 00:15:08.823 Atomic Write Unit (Normal): 1 00:15:08.823 Atomic Write Unit (PFail): 1 00:15:08.823 Atomic Compare & Write Unit: 1 00:15:08.823 Fused Compare & Write: Supported 00:15:08.823 Scatter-Gather List 00:15:08.823 SGL Command Set: Supported (Dword aligned) 00:15:08.823 SGL Keyed: Not Supported 00:15:08.823 SGL Bit Bucket Descriptor: Not Supported 00:15:08.823 SGL Metadata Pointer: Not Supported 00:15:08.823 Oversized SGL: Not Supported 00:15:08.823 SGL Metadata Address: Not Supported 00:15:08.823 SGL Offset: Not Supported 00:15:08.823 Transport SGL Data Block: Not Supported 00:15:08.823 Replay Protected Memory Block: Not Supported 00:15:08.823 00:15:08.823 Firmware Slot Information 00:15:08.823 ========================= 00:15:08.823 Active slot: 1 00:15:08.823 Slot 1 Firmware Revision: 25.01 00:15:08.823 00:15:08.823 00:15:08.823 Commands Supported and Effects 00:15:08.823 ============================== 00:15:08.823 Admin Commands 00:15:08.823 -------------- 00:15:08.823 Get Log Page (02h): Supported 00:15:08.823 Identify (06h): Supported 00:15:08.823 Abort (08h): Supported 00:15:08.823 Set Features (09h): Supported 00:15:08.823 Get Features (0Ah): Supported 00:15:08.823 Asynchronous Event Request (0Ch): Supported 00:15:08.823 Keep Alive (18h): Supported 00:15:08.823 I/O Commands 00:15:08.823 ------------ 00:15:08.823 Flush (00h): Supported LBA-Change 00:15:08.823 Write (01h): Supported LBA-Change 00:15:08.823 Read (02h): Supported 00:15:08.823 Compare (05h): Supported 00:15:08.823 Write Zeroes (08h): Supported LBA-Change 00:15:08.823 Dataset Management (09h): Supported LBA-Change 00:15:08.823 Copy (19h): Supported LBA-Change 00:15:08.823 00:15:08.823 Error Log 00:15:08.823 ========= 00:15:08.823 00:15:08.823 Arbitration 00:15:08.823 =========== 00:15:08.823 Arbitration Burst: 1 00:15:08.823 00:15:08.823 Power Management 00:15:08.823 ================ 00:15:08.823 Number of Power States: 1 00:15:08.823 Current Power State: Power State #0 00:15:08.823 Power State #0: 00:15:08.823 Max Power: 0.00 W 00:15:08.823 Non-Operational State: Operational 00:15:08.823 Entry Latency: Not Reported 00:15:08.823 Exit Latency: Not Reported 00:15:08.823 Relative Read Throughput: 0 00:15:08.823 Relative Read Latency: 0 00:15:08.823 Relative Write Throughput: 0 00:15:08.823 Relative Write Latency: 0 00:15:08.823 Idle Power: Not Reported 00:15:08.823 Active Power: Not Reported 00:15:08.823 Non-Operational Permissive Mode: Not Supported 00:15:08.823 00:15:08.823 Health Information 
00:15:08.823 ================== 00:15:08.823 Critical Warnings: 00:15:08.823 Available Spare Space: OK 00:15:08.823 Temperature: OK 00:15:08.823 Device Reliability: OK 00:15:08.823 Read Only: No 00:15:08.824 Volatile Memory Backup: OK 00:15:08.824 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:08.824 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:08.824 Available Spare: 0% 00:15:08.824 Available Sp[2024-11-26 19:16:31.686796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:08.824 [2024-11-26 19:16:31.694675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:08.824 [2024-11-26 19:16:31.694708] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:08.824 [2024-11-26 19:16:31.694717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.824 [2024-11-26 19:16:31.694723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.824 [2024-11-26 19:16:31.694728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.824 [2024-11-26 19:16:31.694734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.824 [2024-11-26 19:16:31.694786] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:08.824 [2024-11-26 19:16:31.694796] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:08.824 [2024-11-26 19:16:31.695791] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:08.824 [2024-11-26 19:16:31.695835] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:08.824 [2024-11-26 19:16:31.695842] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:08.824 [2024-11-26 19:16:31.696790] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:08.824 [2024-11-26 19:16:31.696801] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:08.824 [2024-11-26 19:16:31.696854] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:08.824 [2024-11-26 19:16:31.697807] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:08.824 are Threshold: 0% 00:15:08.824 Life Percentage Used: 0% 00:15:08.824 Data Units Read: 0 00:15:08.824 Data Units Written: 0 00:15:08.824 Host Read Commands: 0 00:15:08.824 Host Write Commands: 0 00:15:08.824 Controller Busy Time: 0 minutes 00:15:08.824 Power Cycles: 0 00:15:08.824 Power On Hours: 0 hours 00:15:08.824 Unsafe Shutdowns: 0 00:15:08.824 Unrecoverable Media Errors: 0 00:15:08.824 Lifetime Error Log Entries: 0 00:15:08.824 Warning Temperature 
Time: 0 minutes 00:15:08.824 Critical Temperature Time: 0 minutes 00:15:08.824 00:15:08.824 Number of Queues 00:15:08.824 ================ 00:15:08.824 Number of I/O Submission Queues: 127 00:15:08.824 Number of I/O Completion Queues: 127 00:15:08.824 00:15:08.824 Active Namespaces 00:15:08.824 ================= 00:15:08.824 Namespace ID:1 00:15:08.824 Error Recovery Timeout: Unlimited 00:15:08.824 Command Set Identifier: NVM (00h) 00:15:08.824 Deallocate: Supported 00:15:08.824 Deallocated/Unwritten Error: Not Supported 00:15:08.824 Deallocated Read Value: Unknown 00:15:08.824 Deallocate in Write Zeroes: Not Supported 00:15:08.824 Deallocated Guard Field: 0xFFFF 00:15:08.824 Flush: Supported 00:15:08.824 Reservation: Supported 00:15:08.824 Namespace Sharing Capabilities: Multiple Controllers 00:15:08.824 Size (in LBAs): 131072 (0GiB) 00:15:08.824 Capacity (in LBAs): 131072 (0GiB) 00:15:08.824 Utilization (in LBAs): 131072 (0GiB) 00:15:08.824 NGUID: 971D8E3E8FE643AEA00D18B47793871A 00:15:08.824 UUID: 971d8e3e-8fe6-43ae-a00d-18b47793871a 00:15:08.824 Thin Provisioning: Not Supported 00:15:08.824 Per-NS Atomic Units: Yes 00:15:08.824 Atomic Boundary Size (Normal): 0 00:15:08.824 Atomic Boundary Size (PFail): 0 00:15:08.824 Atomic Boundary Offset: 0 00:15:08.824 Maximum Single Source Range Length: 65535 00:15:08.824 Maximum Copy Length: 65535 00:15:08.824 Maximum Source Range Count: 1 00:15:08.824 NGUID/EUI64 Never Reused: No 00:15:08.824 Namespace Write Protected: No 00:15:08.824 Number of LBA Formats: 1 00:15:08.824 Current LBA Format: LBA Format #00 00:15:08.824 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:08.824 00:15:08.824 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:09.082 [2024-11-26 19:16:31.937031] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:14.354 Initializing NVMe Controllers 00:15:14.354 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:14.354 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:14.354 Initialization complete. Launching workers. 
00:15:14.354 ======================================================== 00:15:14.354 Latency(us) 00:15:14.354 Device Information : IOPS MiB/s Average min max 00:15:14.354 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39962.59 156.10 3203.37 948.57 9638.48 00:15:14.354 ======================================================== 00:15:14.354 Total : 39962.59 156.10 3203.37 948.57 9638.48 00:15:14.354 00:15:14.354 [2024-11-26 19:16:37.041925] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:14.355 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:14.355 [2024-11-26 19:16:37.274639] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.635 Initializing NVMe Controllers 00:15:19.635 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:19.635 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:19.635 Initialization complete. Launching workers. 00:15:19.635 ======================================================== 00:15:19.635 Latency(us) 00:15:19.635 Device Information : IOPS MiB/s Average min max 00:15:19.635 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39932.40 155.99 3205.81 968.73 10601.64 00:15:19.635 ======================================================== 00:15:19.635 Total : 39932.40 155.99 3205.81 968.73 10601.64 00:15:19.635 00:15:19.635 [2024-11-26 19:16:42.296220] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.635 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:19.635 [2024-11-26 19:16:42.500024] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:25.034 [2024-11-26 19:16:47.632760] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:25.034 Initializing NVMe Controllers 00:15:25.034 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:25.034 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:25.034 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:25.034 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:25.034 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:25.034 Initialization complete. Launching workers. 
00:15:25.034 Starting thread on core 2 00:15:25.034 Starting thread on core 3 00:15:25.034 Starting thread on core 1 00:15:25.034 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:25.034 [2024-11-26 19:16:47.931241] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:28.319 [2024-11-26 19:16:51.023937] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:28.319 Initializing NVMe Controllers 00:15:28.319 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:28.319 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:28.319 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:28.319 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:28.319 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:28.319 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:28.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:28.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:28.319 Initialization complete. Launching workers. 00:15:28.319 Starting thread on core 1 with urgent priority queue 00:15:28.319 Starting thread on core 2 with urgent priority queue 00:15:28.319 Starting thread on core 3 with urgent priority queue 00:15:28.319 Starting thread on core 0 with urgent priority queue 00:15:28.319 SPDK bdev Controller (SPDK2 ) core 0: 2751.33 IO/s 36.35 secs/100000 ios 00:15:28.319 SPDK bdev Controller (SPDK2 ) core 1: 2249.67 IO/s 44.45 secs/100000 ios 00:15:28.319 SPDK bdev Controller (SPDK2 ) core 2: 2878.00 IO/s 34.75 secs/100000 ios 00:15:28.319 SPDK bdev Controller (SPDK2 ) core 3: 2186.67 IO/s 45.73 secs/100000 ios 00:15:28.319 ======================================================== 00:15:28.319 00:15:28.319 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:28.319 [2024-11-26 19:16:51.309128] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:28.319 Initializing NVMe Controllers 00:15:28.319 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:28.319 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:28.319 Namespace ID: 1 size: 0GB 00:15:28.319 Initialization complete. 00:15:28.319 INFO: using host memory buffer for IO 00:15:28.319 Hello world! 
00:15:28.319 [2024-11-26 19:16:51.329251] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:28.319 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:28.577 [2024-11-26 19:16:51.606041] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:29.953 Initializing NVMe Controllers 00:15:29.953 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:29.953 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:29.953 Initialization complete. Launching workers. 00:15:29.953 submit (in ns) avg, min, max = 8144.0, 3134.3, 7988901.9 00:15:29.953 complete (in ns) avg, min, max = 19678.7, 1722.9, 3998204.8 00:15:29.953 00:15:29.953 Submit histogram 00:15:29.953 ================ 00:15:29.953 Range in us Cumulative Count 00:15:29.953 3.124 - 3.139: 0.0119% ( 2) 00:15:29.953 3.139 - 3.154: 0.0418% ( 5) 00:15:29.953 3.154 - 3.170: 0.0896% ( 8) 00:15:29.953 3.170 - 3.185: 0.1613% ( 12) 00:15:29.953 3.185 - 3.200: 0.6094% ( 75) 00:15:29.953 3.200 - 3.215: 2.4494% ( 308) 00:15:29.953 3.215 - 3.230: 6.0577% ( 604) 00:15:29.953 3.230 - 3.246: 9.9409% ( 650) 00:15:29.953 3.246 - 3.261: 14.8695% ( 825) 00:15:29.953 3.261 - 3.276: 21.3752% ( 1089) 00:15:29.953 3.276 - 3.291: 27.7615% ( 1069) 00:15:29.953 3.291 - 3.307: 33.7774% ( 1007) 00:15:29.953 3.307 - 3.322: 39.6141% ( 977) 00:15:29.953 3.322 - 3.337: 45.2058% ( 936) 00:15:29.953 3.337 - 3.352: 50.4690% ( 881) 00:15:29.953 3.352 - 3.368: 57.3332% ( 1149) 00:15:29.953 3.368 - 3.383: 64.2034% ( 1150) 00:15:29.953 3.383 - 3.398: 68.6122% ( 738) 00:15:29.953 3.398 - 3.413: 74.3473% ( 960) 00:15:29.953 3.413 - 3.429: 79.4014% ( 846) 00:15:29.953 3.429 - 3.444: 82.5318% ( 524) 00:15:29.953 3.444 - 3.459: 84.9394% ( 403) 00:15:29.953 3.459 - 3.474: 86.5046% ( 262) 00:15:29.953 3.474 - 3.490: 87.4664% ( 161) 00:15:29.953 3.490 - 3.505: 88.1415% ( 113) 00:15:29.953 3.505 - 3.520: 88.8225% ( 114) 00:15:29.953 3.520 - 3.535: 89.3482% ( 88) 00:15:29.953 3.535 - 3.550: 90.1308% ( 131) 00:15:29.953 3.550 - 3.566: 90.9732% ( 141) 00:15:29.953 3.566 - 3.581: 91.9111% ( 157) 00:15:29.953 3.581 - 3.596: 92.7535% ( 141) 00:15:29.953 3.596 - 3.611: 93.7451% ( 166) 00:15:29.953 3.611 - 3.627: 94.8205% ( 180) 00:15:29.953 3.627 - 3.642: 95.6807% ( 144) 00:15:29.953 3.642 - 3.657: 96.4813% ( 134) 00:15:29.953 3.657 - 3.672: 97.2460% ( 128) 00:15:29.953 3.672 - 3.688: 97.9987% ( 126) 00:15:29.953 3.688 - 3.703: 98.4587% ( 77) 00:15:29.953 3.703 - 3.718: 98.8590% ( 67) 00:15:29.953 3.718 - 3.733: 99.1517% ( 49) 00:15:29.953 3.733 - 3.749: 99.3548% ( 34) 00:15:29.953 3.749 - 3.764: 99.4384% ( 14) 00:15:29.953 3.764 - 3.779: 99.5101% ( 12) 00:15:29.953 3.779 - 3.794: 99.5878% ( 13) 00:15:29.953 3.794 - 3.810: 99.6296% ( 7) 00:15:29.953 3.810 - 3.825: 99.6595% ( 5) 00:15:29.953 3.825 - 3.840: 99.6714% ( 2) 00:15:29.953 4.968 - 4.998: 99.6774% ( 1) 00:15:29.953 4.998 - 5.029: 99.6893% ( 2) 00:15:29.953 5.272 - 5.303: 99.6953% ( 1) 00:15:29.953 5.303 - 5.333: 99.7013% ( 1) 00:15:29.953 5.333 - 5.364: 99.7132% ( 2) 00:15:29.953 5.364 - 5.394: 99.7192% ( 1) 00:15:29.953 5.394 - 5.425: 99.7312% ( 2) 00:15:29.953 5.455 - 5.486: 99.7431% ( 2) 00:15:29.953 5.486 - 5.516: 99.7491% ( 1) 00:15:29.953 5.516 - 5.547: 99.7551% ( 1) 
00:15:29.953 5.547 - 5.577: 99.7610% ( 1) 00:15:29.953 5.638 - 5.669: 99.7670% ( 1) 00:15:29.953 5.669 - 5.699: 99.7730% ( 1) 00:15:29.953 5.699 - 5.730: 99.7790% ( 1) 00:15:29.953 5.730 - 5.760: 99.7909% ( 2) 00:15:29.953 5.760 - 5.790: 99.8029% ( 2) 00:15:29.953 5.790 - 5.821: 99.8088% ( 1) 00:15:29.953 5.882 - 5.912: 99.8148% ( 1) 00:15:29.953 5.912 - 5.943: 99.8268% ( 2) 00:15:29.953 6.034 - 6.065: 99.8327% ( 1) 00:15:29.953 6.187 - 6.217: 99.8387% ( 1) 00:15:29.953 6.248 - 6.278: 99.8447% ( 1) 00:15:29.953 6.278 - 6.309: 99.8506% ( 1) 00:15:29.953 6.339 - 6.370: 99.8566% ( 1) 00:15:29.953 6.491 - 6.522: 99.8626% ( 1) 00:15:29.953 6.522 - 6.552: 99.8686% ( 1) 00:15:29.953 6.827 - 6.857: 99.8745% ( 1) 00:15:29.953 7.497 - 7.528: 99.8805% ( 1) 00:15:29.953 8.229 - 8.290: 99.8865% ( 1) 00:15:29.953 3994.575 - 4025.783: 99.9940% ( 18) 00:15:29.953 7957.943 - 7989.150: 100.0000% ( 1) 00:15:29.953 00:15:29.953 Complete histogram 00:15:29.953 ================== 00:15:29.953 Ra[2024-11-26 19:16:52.700640] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:29.953 nge in us Cumulative Count 00:15:29.953 1.722 - 1.730: 0.0060% ( 1) 00:15:29.953 1.730 - 1.737: 0.0657% ( 10) 00:15:29.953 1.737 - 1.745: 0.5018% ( 73) 00:15:29.953 1.745 - 1.752: 0.9738% ( 79) 00:15:29.953 1.752 - 1.760: 1.0992% ( 21) 00:15:29.953 1.760 - 1.768: 1.1530% ( 9) 00:15:29.953 1.768 - 1.775: 1.1888% ( 6) 00:15:29.953 1.775 - 1.783: 1.7683% ( 97) 00:15:29.953 1.783 - 1.790: 10.5024% ( 1462) 00:15:29.953 1.790 - 1.798: 41.9918% ( 5271) 00:15:29.953 1.798 - 1.806: 69.4127% ( 4590) 00:15:29.953 1.806 - 1.813: 77.7765% ( 1400) 00:15:29.953 1.813 - 1.821: 80.4230% ( 443) 00:15:29.953 1.821 - 1.829: 82.2510% ( 306) 00:15:29.953 1.829 - 1.836: 84.1269% ( 314) 00:15:29.953 1.836 - 1.844: 87.7711% ( 610) 00:15:29.953 1.844 - 1.851: 91.8334% ( 680) 00:15:29.953 1.851 - 1.859: 94.6114% ( 465) 00:15:29.953 1.859 - 1.867: 96.0870% ( 247) 00:15:29.953 1.867 - 1.874: 97.1563% ( 179) 00:15:29.953 1.874 - 1.882: 97.9031% ( 125) 00:15:29.953 1.882 - 1.890: 98.4109% ( 85) 00:15:29.953 1.890 - 1.897: 98.7275% ( 53) 00:15:29.953 1.897 - 1.905: 98.8888% ( 27) 00:15:29.953 1.905 - 1.912: 99.0262% ( 23) 00:15:29.953 1.912 - 1.920: 99.1457% ( 20) 00:15:29.953 1.920 - 1.928: 99.2293% ( 14) 00:15:29.953 1.928 - 1.935: 99.3010% ( 12) 00:15:29.953 1.935 - 1.943: 99.3309% ( 5) 00:15:29.953 1.943 - 1.950: 99.3429% ( 2) 00:15:29.953 1.950 - 1.966: 99.3667% ( 4) 00:15:29.953 1.966 - 1.981: 99.3787% ( 2) 00:15:29.953 1.981 - 1.996: 99.3847% ( 1) 00:15:29.953 2.072 - 2.088: 99.3906% ( 1) 00:15:29.953 2.225 - 2.240: 99.3966% ( 1) 00:15:29.953 2.270 - 2.286: 99.4026% ( 1) 00:15:29.953 3.459 - 3.474: 99.4086% ( 1) 00:15:29.953 3.490 - 3.505: 99.4205% ( 2) 00:15:29.953 3.657 - 3.672: 99.4265% ( 1) 00:15:29.953 3.840 - 3.855: 99.4325% ( 1) 00:15:29.953 3.886 - 3.901: 99.4384% ( 1) 00:15:29.953 3.931 - 3.962: 99.4444% ( 1) 00:15:29.953 3.962 - 3.992: 99.4623% ( 3) 00:15:29.953 4.084 - 4.114: 99.4683% ( 1) 00:15:29.953 4.114 - 4.145: 99.4743% ( 1) 00:15:29.953 4.450 - 4.480: 99.4862% ( 2) 00:15:29.953 4.632 - 4.663: 99.4982% ( 2) 00:15:29.953 4.693 - 4.724: 99.5042% ( 1) 00:15:29.954 4.754 - 4.785: 99.5101% ( 1) 00:15:29.954 4.968 - 4.998: 99.5161% ( 1) 00:15:29.954 5.882 - 5.912: 99.5221% ( 1) 00:15:29.954 6.126 - 6.156: 99.5280% ( 1) 00:15:29.954 6.400 - 6.430: 99.5340% ( 1) 00:15:29.954 7.101 - 7.131: 99.5400% ( 1) 00:15:29.954 11.337 - 11.398: 99.5460% ( 1) 00:15:29.954 12.251 - 12.312: 99.5519% ( 1) 
00:15:29.954 3526.461 - 3542.065: 99.5579% ( 1) 00:15:29.954 3978.971 - 3994.575: 99.5639% ( 1) 00:15:29.954 3994.575 - 4025.783: 100.0000% ( 73) 00:15:29.954 00:15:29.954 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:29.954 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:29.954 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:29.954 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:29.954 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:29.954 [ 00:15:29.954 { 00:15:29.954 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:29.954 "subtype": "Discovery", 00:15:29.954 "listen_addresses": [], 00:15:29.954 "allow_any_host": true, 00:15:29.954 "hosts": [] 00:15:29.954 }, 00:15:29.954 { 00:15:29.954 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:29.954 "subtype": "NVMe", 00:15:29.954 "listen_addresses": [ 00:15:29.954 { 00:15:29.954 "trtype": "VFIOUSER", 00:15:29.954 "adrfam": "IPv4", 00:15:29.954 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:29.954 "trsvcid": "0" 00:15:29.954 } 00:15:29.954 ], 00:15:29.954 "allow_any_host": true, 00:15:29.954 "hosts": [], 00:15:29.954 "serial_number": "SPDK1", 00:15:29.954 "model_number": "SPDK bdev Controller", 00:15:29.954 "max_namespaces": 32, 00:15:29.954 "min_cntlid": 1, 00:15:29.954 "max_cntlid": 65519, 00:15:29.954 "namespaces": [ 00:15:29.954 { 00:15:29.954 "nsid": 1, 00:15:29.954 "bdev_name": "Malloc1", 00:15:29.954 "name": "Malloc1", 00:15:29.954 "nguid": "A117B88A09D74ABA8F0511543D42D73F", 00:15:29.954 "uuid": "a117b88a-09d7-4aba-8f05-11543d42d73f" 00:15:29.954 }, 00:15:29.954 { 00:15:29.954 "nsid": 2, 00:15:29.954 "bdev_name": "Malloc3", 00:15:29.954 "name": "Malloc3", 00:15:29.954 "nguid": "7688712EF268451097299DF31BC248EA", 00:15:29.954 "uuid": "7688712e-f268-4510-9729-9df31bc248ea" 00:15:29.954 } 00:15:29.954 ] 00:15:29.954 }, 00:15:29.954 { 00:15:29.954 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:29.954 "subtype": "NVMe", 00:15:29.954 "listen_addresses": [ 00:15:29.954 { 00:15:29.954 "trtype": "VFIOUSER", 00:15:29.954 "adrfam": "IPv4", 00:15:29.954 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:29.954 "trsvcid": "0" 00:15:29.954 } 00:15:29.954 ], 00:15:29.954 "allow_any_host": true, 00:15:29.954 "hosts": [], 00:15:29.954 "serial_number": "SPDK2", 00:15:29.954 "model_number": "SPDK bdev Controller", 00:15:29.954 "max_namespaces": 32, 00:15:29.954 "min_cntlid": 1, 00:15:29.954 "max_cntlid": 65519, 00:15:29.954 "namespaces": [ 00:15:29.954 { 00:15:29.954 "nsid": 1, 00:15:29.954 "bdev_name": "Malloc2", 00:15:29.954 "name": "Malloc2", 00:15:29.954 "nguid": "971D8E3E8FE643AEA00D18B47793871A", 00:15:29.954 "uuid": "971d8e3e-8fe6-43ae-a00d-18b47793871a" 00:15:29.954 } 00:15:29.954 ] 00:15:29.954 } 00:15:29.954 ] 00:15:29.954 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:29.954 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3713162 00:15:29.954 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:29.954 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:29.954 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:29.954 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:29.954 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:29.954 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:29.954 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:29.954 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:30.213 [2024-11-26 19:16:53.112751] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:30.213 Malloc4 00:15:30.213 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:30.469 [2024-11-26 19:16:53.364553] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:30.470 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:30.470 Asynchronous Event Request test 00:15:30.470 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:30.470 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:30.470 Registering asynchronous event callbacks... 00:15:30.470 Starting namespace attribute notice tests for all controllers... 00:15:30.470 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:30.470 aer_cb - Changed Namespace 00:15:30.470 Cleaning up... 
00:15:30.470 [ 00:15:30.470 { 00:15:30.470 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:30.470 "subtype": "Discovery", 00:15:30.470 "listen_addresses": [], 00:15:30.470 "allow_any_host": true, 00:15:30.470 "hosts": [] 00:15:30.470 }, 00:15:30.470 { 00:15:30.470 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:30.470 "subtype": "NVMe", 00:15:30.470 "listen_addresses": [ 00:15:30.470 { 00:15:30.470 "trtype": "VFIOUSER", 00:15:30.470 "adrfam": "IPv4", 00:15:30.470 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:30.470 "trsvcid": "0" 00:15:30.470 } 00:15:30.470 ], 00:15:30.470 "allow_any_host": true, 00:15:30.470 "hosts": [], 00:15:30.470 "serial_number": "SPDK1", 00:15:30.470 "model_number": "SPDK bdev Controller", 00:15:30.470 "max_namespaces": 32, 00:15:30.470 "min_cntlid": 1, 00:15:30.470 "max_cntlid": 65519, 00:15:30.470 "namespaces": [ 00:15:30.470 { 00:15:30.470 "nsid": 1, 00:15:30.470 "bdev_name": "Malloc1", 00:15:30.470 "name": "Malloc1", 00:15:30.470 "nguid": "A117B88A09D74ABA8F0511543D42D73F", 00:15:30.470 "uuid": "a117b88a-09d7-4aba-8f05-11543d42d73f" 00:15:30.470 }, 00:15:30.470 { 00:15:30.470 "nsid": 2, 00:15:30.470 "bdev_name": "Malloc3", 00:15:30.470 "name": "Malloc3", 00:15:30.470 "nguid": "7688712EF268451097299DF31BC248EA", 00:15:30.470 "uuid": "7688712e-f268-4510-9729-9df31bc248ea" 00:15:30.470 } 00:15:30.470 ] 00:15:30.470 }, 00:15:30.470 { 00:15:30.470 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:30.470 "subtype": "NVMe", 00:15:30.470 "listen_addresses": [ 00:15:30.470 { 00:15:30.470 "trtype": "VFIOUSER", 00:15:30.470 "adrfam": "IPv4", 00:15:30.470 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:30.470 "trsvcid": "0" 00:15:30.470 } 00:15:30.470 ], 00:15:30.470 "allow_any_host": true, 00:15:30.470 "hosts": [], 00:15:30.470 "serial_number": "SPDK2", 00:15:30.470 "model_number": "SPDK bdev Controller", 00:15:30.470 "max_namespaces": 32, 00:15:30.470 "min_cntlid": 1, 00:15:30.470 "max_cntlid": 65519, 00:15:30.470 "namespaces": [ 00:15:30.470 { 00:15:30.470 "nsid": 1, 00:15:30.470 "bdev_name": "Malloc2", 00:15:30.470 "name": "Malloc2", 00:15:30.470 "nguid": "971D8E3E8FE643AEA00D18B47793871A", 00:15:30.470 "uuid": "971d8e3e-8fe6-43ae-a00d-18b47793871a" 00:15:30.470 }, 00:15:30.470 { 00:15:30.470 "nsid": 2, 00:15:30.470 "bdev_name": "Malloc4", 00:15:30.470 "name": "Malloc4", 00:15:30.470 "nguid": "F649CB6A60F2496C982747EA8DD70AB0", 00:15:30.470 "uuid": "f649cb6a-60f2-496c-9827-47ea8dd70ab0" 00:15:30.470 } 00:15:30.470 ] 00:15:30.470 } 00:15:30.470 ] 00:15:30.728 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3713162 00:15:30.728 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:30.728 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3705593 00:15:30.728 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3705593 ']' 00:15:30.728 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3705593 00:15:30.728 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:30.728 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:30.728 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3705593 00:15:30.728 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:30.728 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:30.728 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3705593' 00:15:30.728 killing process with pid 3705593 00:15:30.728 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3705593 00:15:30.728 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3705593 00:15:30.987 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:30.987 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:30.987 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:30.987 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:30.987 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:30.987 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3713264 00:15:30.987 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:30.987 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3713264' 00:15:30.987 Process pid: 3713264 00:15:30.987 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:30.987 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3713264 00:15:30.987 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3713264 ']' 00:15:30.987 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.987 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:30.987 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.987 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:30.987 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:30.987 [2024-11-26 19:16:53.920820] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:30.987 [2024-11-26 19:16:53.921683] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:15:30.987 [2024-11-26 19:16:53.921721] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.987 [2024-11-26 19:16:53.995450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:30.987 [2024-11-26 19:16:54.032345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.987 [2024-11-26 19:16:54.032382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.987 [2024-11-26 19:16:54.032389] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.987 [2024-11-26 19:16:54.032396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.987 [2024-11-26 19:16:54.032400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.987 [2024-11-26 19:16:54.033957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.987 [2024-11-26 19:16:54.034065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.987 [2024-11-26 19:16:54.034173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.987 [2024-11-26 19:16:54.034174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:31.246 [2024-11-26 19:16:54.102894] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:31.246 [2024-11-26 19:16:54.103651] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:31.246 [2024-11-26 19:16:54.104026] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:31.246 [2024-11-26 19:16:54.104727] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:31.246 [2024-11-26 19:16:54.104756] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
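The target setup traced below repeats the same five-step RPC sequence for each emulated device; condensed into a plain shell sketch (commands and arguments are exactly those in the trace, with the in-tree scripts/rpc.py shortened to rpc.py; this run creates two devices):

    rpc.py nvmf_create_transport -t VFIOUSER -M -I          # extra transport flags used by this interrupt-mode run
    mkdir -p /var/run/vfio-user
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        rpc.py bdev_malloc_create 64 512 -b Malloc$i        # 64 MB malloc bdev, 512-byte blocks
        rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done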
00:15:31.246 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.246 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:31.246 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:32.182 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:32.441 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:32.441 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:32.441 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:32.441 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:32.441 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:32.441 Malloc1 00:15:32.700 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:32.700 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:32.959 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:33.218 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:33.218 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:33.218 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:33.476 Malloc2 00:15:33.476 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:33.476 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:33.734 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:33.993 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:33.993 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3713264 00:15:33.993 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 3713264 ']' 00:15:33.993 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3713264 00:15:33.993 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:33.993 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:33.993 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3713264 00:15:33.993 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:33.993 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:33.993 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3713264' 00:15:33.993 killing process with pid 3713264 00:15:33.993 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3713264 00:15:33.993 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3713264 00:15:34.253 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:34.253 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:34.253 00:15:34.253 real 0m50.803s 00:15:34.253 user 3m16.529s 00:15:34.253 sys 0m3.216s 00:15:34.253 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.253 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:34.253 ************************************ 00:15:34.253 END TEST nvmf_vfio_user 00:15:34.253 ************************************ 00:15:34.253 19:16:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:34.253 19:16:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:34.253 19:16:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.253 19:16:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:34.253 ************************************ 00:15:34.253 START TEST nvmf_vfio_user_nvme_compliance 00:15:34.253 ************************************ 00:15:34.253 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:34.512 * Looking for test storage... 
00:15:34.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:34.512 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:34.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.513 --rc genhtml_branch_coverage=1 00:15:34.513 --rc genhtml_function_coverage=1 00:15:34.513 --rc genhtml_legend=1 00:15:34.513 --rc geninfo_all_blocks=1 00:15:34.513 --rc geninfo_unexecuted_blocks=1 00:15:34.513 00:15:34.513 ' 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:34.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.513 --rc genhtml_branch_coverage=1 00:15:34.513 --rc genhtml_function_coverage=1 00:15:34.513 --rc genhtml_legend=1 00:15:34.513 --rc geninfo_all_blocks=1 00:15:34.513 --rc geninfo_unexecuted_blocks=1 00:15:34.513 00:15:34.513 ' 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:34.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.513 --rc genhtml_branch_coverage=1 00:15:34.513 --rc genhtml_function_coverage=1 00:15:34.513 --rc genhtml_legend=1 00:15:34.513 --rc geninfo_all_blocks=1 00:15:34.513 --rc geninfo_unexecuted_blocks=1 00:15:34.513 00:15:34.513 ' 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:34.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.513 --rc genhtml_branch_coverage=1 00:15:34.513 --rc genhtml_function_coverage=1 00:15:34.513 --rc genhtml_legend=1 00:15:34.513 --rc geninfo_all_blocks=1 00:15:34.513 --rc 
geninfo_unexecuted_blocks=1 00:15:34.513 00:15:34.513 ' 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:34.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3714021 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3714021' 00:15:34.513 Process pid: 3714021 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3714021 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3714021 ']' 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.513 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:34.514 [2024-11-26 19:16:57.551342] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:15:34.514 [2024-11-26 19:16:57.551388] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.772 [2024-11-26 19:16:57.627238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:34.772 [2024-11-26 19:16:57.666193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.772 [2024-11-26 19:16:57.666232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.772 [2024-11-26 19:16:57.666239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.772 [2024-11-26 19:16:57.666244] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.772 [2024-11-26 19:16:57.666249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.772 [2024-11-26 19:16:57.667735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.772 [2024-11-26 19:16:57.667771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.772 [2024-11-26 19:16:57.667773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.772 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.772 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:34.772 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:35.709 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:35.709 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:35.709 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:35.709 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.709 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:35.709 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.709 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:35.709 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:35.709 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.709 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:35.968 malloc0 00:15:35.968 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.968 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:35.968 19:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.968 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:35.968 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.968 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:35.968 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.968 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:35.968 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.968 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:35.968 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.968 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:35.968 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.968 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:35.968 00:15:35.968 00:15:35.968 CUnit - A unit testing framework for C - Version 2.1-3 00:15:35.968 http://cunit.sourceforge.net/ 00:15:35.968 00:15:35.968 00:15:35.968 Suite: nvme_compliance 00:15:35.968 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-26 19:16:59.021177] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.968 [2024-11-26 19:16:59.022526] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:35.968 [2024-11-26 19:16:59.022541] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:35.968 [2024-11-26 19:16:59.022547] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:35.968 [2024-11-26 19:16:59.024195] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.968 passed 00:15:36.228 Test: admin_identify_ctrlr_verify_fused ...[2024-11-26 19:16:59.099714] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.228 [2024-11-26 19:16:59.102736] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.228 passed 00:15:36.228 Test: admin_identify_ns ...[2024-11-26 19:16:59.182030] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.228 [2024-11-26 19:16:59.241681] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:36.228 [2024-11-26 19:16:59.249680] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:36.228 [2024-11-26 19:16:59.270769] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:36.228 passed 00:15:36.486 Test: admin_get_features_mandatory_features ...[2024-11-26 19:16:59.344575] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.486 [2024-11-26 19:16:59.347598] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.486 passed 00:15:36.486 Test: admin_get_features_optional_features ...[2024-11-26 19:16:59.424100] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.486 [2024-11-26 19:16:59.429125] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.486 passed 00:15:36.486 Test: admin_set_features_number_of_queues ...[2024-11-26 19:16:59.503847] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.747 [2024-11-26 19:16:59.612760] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.747 passed 00:15:36.747 Test: admin_get_log_page_mandatory_logs ...[2024-11-26 19:16:59.685331] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.747 [2024-11-26 19:16:59.688349] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.747 passed 00:15:36.747 Test: admin_get_log_page_with_lpo ...[2024-11-26 19:16:59.765984] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.747 [2024-11-26 19:16:59.834686] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:36.747 [2024-11-26 19:16:59.847749] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.005 passed 00:15:37.005 Test: fabric_property_get ...[2024-11-26 19:16:59.921537] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.005 [2024-11-26 19:16:59.922769] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:37.005 [2024-11-26 19:16:59.926570] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.005 passed 00:15:37.005 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-26 19:17:00.003075] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.005 [2024-11-26 19:17:00.004306] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:37.005 [2024-11-26 19:17:00.007128] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.005 passed 00:15:37.005 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-26 19:17:00.087392] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.264 [2024-11-26 19:17:00.171676] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:37.264 [2024-11-26 19:17:00.187680] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:37.264 [2024-11-26 19:17:00.192850] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.264 passed 00:15:37.264 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-26 19:17:00.267819] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.264 [2024-11-26 19:17:00.269052] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:37.264 [2024-11-26 19:17:00.271845] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.264 passed 00:15:37.264 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-26 19:17:00.349640] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.522 [2024-11-26 19:17:00.425681] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:37.522 [2024-11-26 19:17:00.449677] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:37.522 [2024-11-26 19:17:00.454753] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.522 passed 00:15:37.522 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-26 19:17:00.527600] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.522 [2024-11-26 19:17:00.528837] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:37.522 [2024-11-26 19:17:00.528862] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:37.522 [2024-11-26 19:17:00.530625] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.522 passed 00:15:37.522 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-26 19:17:00.608420] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.781 [2024-11-26 19:17:00.700685] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:37.781 [2024-11-26 19:17:00.708678] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:37.781 [2024-11-26 19:17:00.716676] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:37.781 [2024-11-26 19:17:00.724676] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:37.781 [2024-11-26 19:17:00.753762] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.781 passed 00:15:37.781 Test: admin_create_io_sq_verify_pc ...[2024-11-26 19:17:00.827288] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.781 [2024-11-26 19:17:00.842684] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:37.781 [2024-11-26 19:17:00.860446] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.781 passed 00:15:38.040 Test: admin_create_io_qp_max_qps ...[2024-11-26 19:17:00.936950] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:38.976 [2024-11-26 19:17:02.032680] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:39.541 [2024-11-26 19:17:02.411154] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.541 passed 00:15:39.541 Test: admin_create_io_sq_shared_cq ...[2024-11-26 19:17:02.485960] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:39.541 [2024-11-26 19:17:02.617688] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:39.799 [2024-11-26 19:17:02.654741] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.799 passed 00:15:39.799 00:15:39.799 Run Summary: Type Total Ran Passed Failed Inactive 00:15:39.799 suites 1 1 n/a 0 0 00:15:39.799 tests 18 18 18 0 0 00:15:39.799 asserts 
360 360 360 0 n/a 00:15:39.799 00:15:39.799 Elapsed time = 1.492 seconds 00:15:39.799 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3714021 00:15:39.799 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3714021 ']' 00:15:39.799 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3714021 00:15:39.799 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:39.799 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:39.799 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3714021 00:15:39.799 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:39.799 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:39.799 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3714021' 00:15:39.799 killing process with pid 3714021 00:15:39.799 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3714021 00:15:39.799 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3714021 00:15:40.057 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:40.057 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:40.057 00:15:40.057 real 0m5.647s 00:15:40.057 user 0m15.790s 00:15:40.057 sys 0m0.520s 00:15:40.057 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.057 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.057 ************************************ 00:15:40.057 END TEST nvmf_vfio_user_nvme_compliance 00:15:40.057 ************************************ 00:15:40.057 19:17:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:40.057 19:17:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:40.057 19:17:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.057 19:17:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:40.057 ************************************ 00:15:40.057 START TEST nvmf_vfio_user_fuzz 00:15:40.057 ************************************ 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:40.057 * Looking for test storage... 
00:15:40.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:40.057 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:40.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.315 --rc genhtml_branch_coverage=1 00:15:40.315 --rc genhtml_function_coverage=1 00:15:40.315 --rc genhtml_legend=1 00:15:40.315 --rc geninfo_all_blocks=1 00:15:40.315 --rc geninfo_unexecuted_blocks=1 00:15:40.315 00:15:40.315 ' 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:40.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.315 --rc genhtml_branch_coverage=1 00:15:40.315 --rc genhtml_function_coverage=1 00:15:40.315 --rc genhtml_legend=1 00:15:40.315 --rc geninfo_all_blocks=1 00:15:40.315 --rc geninfo_unexecuted_blocks=1 00:15:40.315 00:15:40.315 ' 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:40.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.315 --rc genhtml_branch_coverage=1 00:15:40.315 --rc genhtml_function_coverage=1 00:15:40.315 --rc genhtml_legend=1 00:15:40.315 --rc geninfo_all_blocks=1 00:15:40.315 --rc geninfo_unexecuted_blocks=1 00:15:40.315 00:15:40.315 ' 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:40.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.315 --rc genhtml_branch_coverage=1 00:15:40.315 --rc genhtml_function_coverage=1 00:15:40.315 --rc genhtml_legend=1 00:15:40.315 --rc geninfo_all_blocks=1 00:15:40.315 --rc geninfo_unexecuted_blocks=1 00:15:40.315 00:15:40.315 ' 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:40.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3715011 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3715011' 00:15:40.315 Process pid: 3715011 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3715011 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3715011 ']' 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
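The xtrace output that follows is dense, so here is a consolidated sketch of the target-side setup it performs, assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock socket. Every RPC name and argument below is copied from the trace that follows; nothing beyond that is implied.

# Sketch only: rough rpc.py equivalent of the vfio-user fuzz target setup traced below,
# run from the SPDK checkout after nvmf_tgt has come up on /var/tmp/spdk.sock.
rm -rf /var/run/vfio-user && mkdir -p /var/run/vfio-user
./scripts/rpc.py nvmf_create_transport -t VFIOUSER
./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0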
00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:40.315 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:40.573 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.573 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:40.573 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.507 malloc0 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
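The trid assembled above is a standard SPDK transport ID string, and the run traced next consumes it through the fuzzer's -F option. A minimal sketch of that invocation, with the core mask, 30-second runtime and fixed seed copied from the trace; the relative path assumes the same SPDK checkout this job built.

# Sketch only: the nvme_fuzz invocation traced below, one option group per line for readability.
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
    -N -a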
00:15:41.507 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:13.579 Fuzzing completed. Shutting down the fuzz application 00:16:13.579 00:16:13.579 Dumping successful admin opcodes: 00:16:13.579 9, 10, 00:16:13.579 Dumping successful io opcodes: 00:16:13.579 0, 00:16:13.579 NS: 0x20000081ef00 I/O qp, Total commands completed: 1125948, total successful commands: 4433, random_seed: 534403392 00:16:13.579 NS: 0x20000081ef00 admin qp, Total commands completed: 277872, total successful commands: 64, random_seed: 410269632 00:16:13.579 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:13.579 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.579 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:13.579 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.579 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3715011 00:16:13.579 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3715011 ']' 00:16:13.579 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3715011 00:16:13.579 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:13.579 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.579 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3715011 00:16:13.579 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.579 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.579 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3715011' 00:16:13.579 killing process with pid 3715011 00:16:13.579 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3715011 00:16:13.579 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3715011 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:13.579 00:16:13.579 real 0m32.218s 00:16:13.579 user 0m33.622s 00:16:13.579 sys 0m27.065s 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:13.579 ************************************ 
00:16:13.579 END TEST nvmf_vfio_user_fuzz 00:16:13.579 ************************************ 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:13.579 ************************************ 00:16:13.579 START TEST nvmf_auth_target 00:16:13.579 ************************************ 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:13.579 * Looking for test storage... 00:16:13.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.579 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:13.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.579 --rc genhtml_branch_coverage=1 00:16:13.579 --rc genhtml_function_coverage=1 00:16:13.579 --rc genhtml_legend=1 00:16:13.579 --rc geninfo_all_blocks=1 00:16:13.580 --rc geninfo_unexecuted_blocks=1 00:16:13.580 00:16:13.580 ' 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:13.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.580 --rc genhtml_branch_coverage=1 00:16:13.580 --rc genhtml_function_coverage=1 00:16:13.580 --rc genhtml_legend=1 00:16:13.580 --rc geninfo_all_blocks=1 00:16:13.580 --rc geninfo_unexecuted_blocks=1 00:16:13.580 00:16:13.580 ' 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:13.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.580 --rc genhtml_branch_coverage=1 00:16:13.580 --rc genhtml_function_coverage=1 00:16:13.580 --rc genhtml_legend=1 00:16:13.580 --rc geninfo_all_blocks=1 00:16:13.580 --rc geninfo_unexecuted_blocks=1 00:16:13.580 00:16:13.580 ' 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:13.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.580 --rc genhtml_branch_coverage=1 00:16:13.580 --rc genhtml_function_coverage=1 00:16:13.580 --rc genhtml_legend=1 00:16:13.580 --rc geninfo_all_blocks=1 00:16:13.580 --rc geninfo_unexecuted_blocks=1 00:16:13.580 00:16:13.580 ' 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.580 19:17:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:13.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:13.580 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:13.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:13.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:13.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:13.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:13.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:13.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:13.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:13.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:18.851 
19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:18.851 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:18.851 19:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:18.851 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:18.851 Found net devices under 0000:86:00.0: cvl_0_0 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:18.851 Found net devices under 0000:86:00.1: cvl_0_1 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:18.851 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:18.852 19:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:18.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:18.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:16:18.852 00:16:18.852 --- 10.0.0.2 ping statistics --- 00:16:18.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.852 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:18.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:18.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:16:18.852 00:16:18.852 --- 10.0.0.1 ping statistics --- 00:16:18.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.852 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3723312 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3723312 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3723312 ']' 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
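At this point the trace has finished wiring up the test topology: the first E810 port (cvl_0_0) was moved into the namespace cvl_0_0_ns_spdk and addressed as the target side at 10.0.0.2/24, its peer cvl_0_1 stayed in the default namespace as the initiator side at 10.0.0.1/24, an iptables rule was inserted to admit NVMe/TCP traffic on port 4420, and a ping in each direction confirmed reachability. A condensed sketch of that setup, assuming this run's interface names (they are host-specific and differ on other machines):

  # Target NIC lives in its own namespace; initiator NIC stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP to the listener
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf target is then started inside the namespace (ip netns exec ... nvmf_tgt), which is why every target-side command in the rest of the log is prefixed with the namespace exec.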
00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3723333 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=711d3d39910f5f974155c336d2d82493fe129c33a5858b18 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.D6T 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 711d3d39910f5f974155c336d2d82493fe129c33a5858b18 0 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 711d3d39910f5f974155c336d2d82493fe129c33a5858b18 0 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=711d3d39910f5f974155c336d2d82493fe129c33a5858b18 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
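The gen_dhchap_key trace above draws random bytes with xxd, keeps them as an ASCII hex string, and hands that string to a small inline Python helper that wraps it in the DH-HMAC-CHAP secret representation. A minimal sketch of that wrapping step, not the verbatim SPDK helper, assuming the 4-byte tail is the CRC-32 of the secret appended little-endian (which is consistent with the DHHC-1 strings seen later in this log):

  key=$(xxd -p -c0 -l 24 /dev/urandom)      # 48 hex characters, as in gen_dhchap_key null 48
  python3 - "$key" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                     # the ASCII hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")    # assumption: CRC-32 appended little-endian
print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")   # 00 = null / untransformed secret
EOF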
00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.D6T 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.D6T 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.D6T 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3144d9e1b97788739d3dded8f7d256d8cc03561bb1508e792a7f586329ef8069 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.QdT 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3144d9e1b97788739d3dded8f7d256d8cc03561bb1508e792a7f586329ef8069 3 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3144d9e1b97788739d3dded8f7d256d8cc03561bb1508e792a7f586329ef8069 3 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3144d9e1b97788739d3dded8f7d256d8cc03561bb1508e792a7f586329ef8069 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.QdT 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.QdT 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.QdT 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
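The digests=() map that keeps reappearing in the trace is also what ends up in the second field of each generated secret, so that field records which transform a key was generated for. A hypothetical helper (not part of the test) that classifies a secret by that field; the cat target below is this run's key file and would differ per run:

  # Mirrors the digests=() map traced above: 0=null, 1=sha256, 2=sha384, 3=sha512.
  dhchap_digest() {
    case "$1" in
      DHHC-1:00:*) echo null ;;
      DHHC-1:01:*) echo sha256 ;;
      DHHC-1:02:*) echo sha384 ;;
      DHHC-1:03:*) echo sha512 ;;
      *) echo unknown; return 1 ;;
    esac
  }
  dhchap_digest "$(cat /tmp/spdk.key-null.D6T)"   # prints "null" for the key file created above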
00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=28ee0fcc80a2cfb72dd62c17ba9e1762 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.G8p 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 28ee0fcc80a2cfb72dd62c17ba9e1762 1 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 28ee0fcc80a2cfb72dd62c17ba9e1762 1 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=28ee0fcc80a2cfb72dd62c17ba9e1762 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.G8p 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.G8p 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.G8p 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4b36a08a1e0311e73a1448f4dbb47acd340c155f8458ca7b 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Egz 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4b36a08a1e0311e73a1448f4dbb47acd340c155f8458ca7b 2 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4b36a08a1e0311e73a1448f4dbb47acd340c155f8458ca7b 2 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:18.852 19:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4b36a08a1e0311e73a1448f4dbb47acd340c155f8458ca7b 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:18.852 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Egz 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Egz 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Egz 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7b51823085f858fae2154a925cef9766bbddd6ed728933b1 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.AlH 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7b51823085f858fae2154a925cef9766bbddd6ed728933b1 2 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7b51823085f858fae2154a925cef9766bbddd6ed728933b1 2 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7b51823085f858fae2154a925cef9766bbddd6ed728933b1 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:19.111 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.AlH 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.AlH 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.AlH 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
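Each secret that shows up later in this log (the --dhchap-secret / --dhchap-ctrl-secret arguments to nvme connect) can be unpacked back into the hex string generated here: base64-decode the third field and drop the 4-byte checksum tail. A hedged sketch of that inverse step, again assuming a little-endian CRC-32 tail, run against the sha256 key generated just above (it should print the same 32-character hex string that xxd produced):

  secret='DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV:'   # keys[1] from this run
  python3 - "$secret" <<'EOF'
import base64, sys, zlib
blob = base64.b64decode(sys.argv[1].split(":")[2])
key, crc = blob[:-4], blob[-4:]
ok = zlib.crc32(key).to_bytes(4, "little") == crc   # assumption: little-endian CRC-32 tail
print(key.decode(), "(crc ok)" if ok else "(crc mismatch)")
EOF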
00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e8e630ec6a424daa34b748564f4ed024 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.JKB 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e8e630ec6a424daa34b748564f4ed024 1 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e8e630ec6a424daa34b748564f4ed024 1 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e8e630ec6a424daa34b748564f4ed024 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.JKB 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.JKB 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.JKB 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1aa41cf3b15363d4fa8c7c6895b071c9df9ac9206c34a805b200ec9d4d1e1104 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.2D2 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 1aa41cf3b15363d4fa8c7c6895b071c9df9ac9206c34a805b200ec9d4d1e1104 3 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1aa41cf3b15363d4fa8c7c6895b071c9df9ac9206c34a805b200ec9d4d1e1104 3 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1aa41cf3b15363d4fa8c7c6895b071c9df9ac9206c34a805b200ec9d4d1e1104 00:16:19.111 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:19.112 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:19.112 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.2D2 00:16:19.112 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.2D2 00:16:19.112 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.2D2 00:16:19.112 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:19.112 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3723312 00:16:19.112 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3723312 ']' 00:16:19.112 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.112 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.112 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.112 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.112 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.370 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.370 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:19.370 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3723333 /var/tmp/host.sock 00:16:19.370 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3723333 ']' 00:16:19.370 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:19.370 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.370 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:19.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
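All eight key files now exist, and the trace that follows registers each of them twice: once with the nvmf target (rpc_cmd, default socket /var/tmp/spdk.sock) and once with the host-side bdev stack (hostrpc, socket /var/tmp/host.sock). A condensed reconstruction of the pairing built in this run and of that registration loop; target/auth.sh does this through its rpc_cmd/hostrpc wrappers, so the flat loop below is an equivalent sketch rather than the verbatim script, and the /tmp file names contain random mktemp suffixes that differ on every run:

  keys=(/tmp/spdk.key-null.D6T /tmp/spdk.key-sha256.G8p /tmp/spdk.key-sha384.AlH /tmp/spdk.key-sha512.2D2)
  ckeys=(/tmp/spdk.key-sha512.QdT /tmp/spdk.key-sha384.Egz /tmp/spdk.key-sha256.JKB "")   # key3 has no controller key
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"                        # target side
    "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"  # host side
    if [[ -n ${ckeys[$i]} ]]; then
      "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
      "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
  done

The connect/authenticate loop that follows then refers to these registered names: bdev_nvme_set_options selects one digest/dhgroup combination, nvmf_subsystem_add_host and bdev_nvme_attach_controller are given --dhchap-key keyN (and --dhchap-ctrlr-key ckeyN when present), the qpair's auth state is checked for "completed", and finally the kernel initiator repeats the handshake via nvme connect using the raw DHHC-1 strings.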
00:16:19.370 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.370 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.627 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.627 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:19.627 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:19.627 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.627 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.628 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.628 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:19.628 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.D6T 00:16:19.628 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.628 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.628 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.628 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.D6T 00:16:19.628 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.D6T 00:16:19.885 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.QdT ]] 00:16:19.885 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QdT 00:16:19.885 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.885 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.885 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.885 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QdT 00:16:19.885 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QdT 00:16:20.143 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:20.143 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.G8p 00:16:20.143 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.143 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.143 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.143 19:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.G8p 00:16:20.143 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.G8p 00:16:20.143 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Egz ]] 00:16:20.143 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Egz 00:16:20.143 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.143 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.143 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.143 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Egz 00:16:20.143 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Egz 00:16:20.402 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:20.402 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.AlH 00:16:20.402 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.402 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.402 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.402 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.AlH 00:16:20.402 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.AlH 00:16:20.661 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.JKB ]] 00:16:20.661 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JKB 00:16:20.661 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.661 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.661 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.661 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JKB 00:16:20.661 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JKB 00:16:20.920 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:20.920 19:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2D2 00:16:20.920 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.920 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.920 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.920 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.2D2 00:16:20.920 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.2D2 00:16:20.920 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:20.920 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:20.920 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.920 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.920 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:20.920 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:21.179 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:21.179 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.179 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:21.179 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:21.179 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:21.179 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.179 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.179 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.179 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.179 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.179 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.179 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.179 
19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.437 00:16:21.438 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.438 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.438 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.696 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.696 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.696 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.696 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.696 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.696 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.696 { 00:16:21.696 "cntlid": 1, 00:16:21.696 "qid": 0, 00:16:21.696 "state": "enabled", 00:16:21.696 "thread": "nvmf_tgt_poll_group_000", 00:16:21.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:21.696 "listen_address": { 00:16:21.696 "trtype": "TCP", 00:16:21.696 "adrfam": "IPv4", 00:16:21.696 "traddr": "10.0.0.2", 00:16:21.696 "trsvcid": "4420" 00:16:21.696 }, 00:16:21.696 "peer_address": { 00:16:21.696 "trtype": "TCP", 00:16:21.696 "adrfam": "IPv4", 00:16:21.696 "traddr": "10.0.0.1", 00:16:21.696 "trsvcid": "53578" 00:16:21.696 }, 00:16:21.696 "auth": { 00:16:21.696 "state": "completed", 00:16:21.696 "digest": "sha256", 00:16:21.696 "dhgroup": "null" 00:16:21.696 } 00:16:21.696 } 00:16:21.696 ]' 00:16:21.696 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.696 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.696 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.696 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:21.696 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.696 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.696 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.696 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.955 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:16:21.955 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:16:22.523 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.523 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:22.523 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.523 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.523 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.523 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.523 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:22.523 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:22.782 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:22.782 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.782 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:22.782 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:22.782 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:22.782 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.782 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.782 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.782 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.782 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.782 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.782 19:17:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.782 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.041 00:16:23.041 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.041 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.041 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.299 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.299 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.299 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.299 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.299 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.299 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.299 { 00:16:23.299 "cntlid": 3, 00:16:23.299 "qid": 0, 00:16:23.299 "state": "enabled", 00:16:23.299 "thread": "nvmf_tgt_poll_group_000", 00:16:23.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:23.299 "listen_address": { 00:16:23.299 "trtype": "TCP", 00:16:23.299 "adrfam": "IPv4", 00:16:23.299 "traddr": "10.0.0.2", 00:16:23.299 "trsvcid": "4420" 00:16:23.299 }, 00:16:23.300 "peer_address": { 00:16:23.300 "trtype": "TCP", 00:16:23.300 "adrfam": "IPv4", 00:16:23.300 "traddr": "10.0.0.1", 00:16:23.300 "trsvcid": "39806" 00:16:23.300 }, 00:16:23.300 "auth": { 00:16:23.300 "state": "completed", 00:16:23.300 "digest": "sha256", 00:16:23.300 "dhgroup": "null" 00:16:23.300 } 00:16:23.300 } 00:16:23.300 ]' 00:16:23.300 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.300 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.300 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.300 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:23.300 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.300 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.300 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.300 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.559 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:16:23.559 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:16:24.127 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.127 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:24.127 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.127 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.127 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.127 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.127 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:24.127 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:24.385 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:24.385 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.385 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:24.385 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:24.385 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:24.385 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.385 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.385 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.385 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.385 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.385 19:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.385 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.385 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.643 00:16:24.643 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.643 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.643 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.901 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.901 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.901 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.901 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.901 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.901 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.901 { 00:16:24.901 "cntlid": 5, 00:16:24.901 "qid": 0, 00:16:24.901 "state": "enabled", 00:16:24.901 "thread": "nvmf_tgt_poll_group_000", 00:16:24.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:24.901 "listen_address": { 00:16:24.901 "trtype": "TCP", 00:16:24.901 "adrfam": "IPv4", 00:16:24.901 "traddr": "10.0.0.2", 00:16:24.901 "trsvcid": "4420" 00:16:24.901 }, 00:16:24.901 "peer_address": { 00:16:24.901 "trtype": "TCP", 00:16:24.901 "adrfam": "IPv4", 00:16:24.901 "traddr": "10.0.0.1", 00:16:24.901 "trsvcid": "39840" 00:16:24.901 }, 00:16:24.901 "auth": { 00:16:24.901 "state": "completed", 00:16:24.901 "digest": "sha256", 00:16:24.901 "dhgroup": "null" 00:16:24.901 } 00:16:24.901 } 00:16:24.901 ]' 00:16:24.901 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.901 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.901 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.901 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:24.901 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.901 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.901 19:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.901 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.160 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:16:25.160 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:16:25.728 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.728 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:25.728 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.728 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.728 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.728 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.728 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:25.728 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:25.986 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:25.986 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.986 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.986 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:25.986 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:25.987 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.987 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:25.987 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.987 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:25.987 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.987 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:25.987 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.987 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.245 00:16:26.245 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.245 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.245 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.505 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.505 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.505 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.505 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.505 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.505 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.505 { 00:16:26.505 "cntlid": 7, 00:16:26.505 "qid": 0, 00:16:26.505 "state": "enabled", 00:16:26.505 "thread": "nvmf_tgt_poll_group_000", 00:16:26.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:26.505 "listen_address": { 00:16:26.505 "trtype": "TCP", 00:16:26.505 "adrfam": "IPv4", 00:16:26.505 "traddr": "10.0.0.2", 00:16:26.505 "trsvcid": "4420" 00:16:26.505 }, 00:16:26.505 "peer_address": { 00:16:26.505 "trtype": "TCP", 00:16:26.505 "adrfam": "IPv4", 00:16:26.505 "traddr": "10.0.0.1", 00:16:26.505 "trsvcid": "39864" 00:16:26.505 }, 00:16:26.505 "auth": { 00:16:26.505 "state": "completed", 00:16:26.505 "digest": "sha256", 00:16:26.505 "dhgroup": "null" 00:16:26.505 } 00:16:26.505 } 00:16:26.505 ]' 00:16:26.505 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.505 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.505 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.505 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:26.505 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.505 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.505 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.505 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.765 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:16:26.765 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:16:27.332 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.332 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:27.332 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.332 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.332 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.332 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.332 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.332 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.332 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.591 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:27.591 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.591 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.591 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:27.591 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:27.591 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.591 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.591 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.591 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.591 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.591 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.591 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.591 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.849 00:16:27.849 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.849 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.849 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.108 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.108 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.108 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.108 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.108 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.108 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.108 { 00:16:28.108 "cntlid": 9, 00:16:28.108 "qid": 0, 00:16:28.108 "state": "enabled", 00:16:28.108 "thread": "nvmf_tgt_poll_group_000", 00:16:28.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:28.108 "listen_address": { 00:16:28.108 "trtype": "TCP", 00:16:28.108 "adrfam": "IPv4", 00:16:28.108 "traddr": "10.0.0.2", 00:16:28.108 "trsvcid": "4420" 00:16:28.108 }, 00:16:28.108 "peer_address": { 00:16:28.108 "trtype": "TCP", 00:16:28.108 "adrfam": "IPv4", 00:16:28.108 "traddr": "10.0.0.1", 00:16:28.108 "trsvcid": "39882" 00:16:28.108 }, 00:16:28.108 "auth": { 00:16:28.108 "state": "completed", 00:16:28.108 "digest": "sha256", 00:16:28.108 "dhgroup": "ffdhe2048" 00:16:28.108 } 00:16:28.108 } 00:16:28.108 ]' 00:16:28.108 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.108 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.108 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.108 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:28.108 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.108 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.108 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.108 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.368 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:16:28.368 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:16:28.936 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.936 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:28.936 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.936 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.936 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.936 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.936 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:28.936 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:29.194 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:29.194 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.194 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.194 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:29.194 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:29.194 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.194 19:17:52 
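Each pass also exercises the kernel initiator with the same key material: nvme-cli connects using the cleartext DHHC-1 secrets, the connection is torn down, and the host entry is removed from the subsystem before the next key id is configured. A sketch of that leg assembled from the commands in the trace; $key and $ckey stand for the DHHC-1:xx:...: strings generated by the test, $hostid for the bare UUID, and $hostnqn for the uuid-based host NQN seen above:

    # kernel initiator leg: authenticate with the cleartext DHHC-1 secrets for this key id
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"

    # tear down and deregister the host before the next iteration
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"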
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.194 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.194 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.194 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.194 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.194 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.194 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.194 00:16:29.453 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.453 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.453 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.453 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.453 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.453 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.453 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.453 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.453 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.453 { 00:16:29.453 "cntlid": 11, 00:16:29.453 "qid": 0, 00:16:29.453 "state": "enabled", 00:16:29.453 "thread": "nvmf_tgt_poll_group_000", 00:16:29.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:29.453 "listen_address": { 00:16:29.453 "trtype": "TCP", 00:16:29.453 "adrfam": "IPv4", 00:16:29.453 "traddr": "10.0.0.2", 00:16:29.453 "trsvcid": "4420" 00:16:29.453 }, 00:16:29.453 "peer_address": { 00:16:29.453 "trtype": "TCP", 00:16:29.453 "adrfam": "IPv4", 00:16:29.453 "traddr": "10.0.0.1", 00:16:29.453 "trsvcid": "39916" 00:16:29.453 }, 00:16:29.453 "auth": { 00:16:29.453 "state": "completed", 00:16:29.453 "digest": "sha256", 00:16:29.453 "dhgroup": "ffdhe2048" 00:16:29.453 } 00:16:29.453 } 00:16:29.453 ]' 00:16:29.453 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.453 19:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.453 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.712 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:29.712 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.712 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.712 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.712 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.970 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:16:29.970 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:30.537 19:17:53 
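Each iteration in the trace begins with the same two-step setup: the host's bdev_nvme layer is restricted to the digest and DH group under test, and the host NQN is re-added to the subsystem with the key pair for the current key id. Roughly, with the flag names exactly as they appear in the log (key3, the last key in this trace, is added without a controller key, matching the empty ckey expansion at auth.sh@68):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # host side: restrict bdev_nvme to the digest / DH group being exercised
    $rpc_py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # target side: register the host NQN with this iteration's key pair
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2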
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.537 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.796 00:16:30.796 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.796 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.796 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.054 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.054 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.054 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.054 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.054 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.054 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.054 { 00:16:31.054 "cntlid": 13, 00:16:31.054 "qid": 0, 00:16:31.054 "state": "enabled", 00:16:31.054 "thread": "nvmf_tgt_poll_group_000", 00:16:31.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:31.054 "listen_address": { 00:16:31.054 "trtype": "TCP", 00:16:31.054 "adrfam": "IPv4", 00:16:31.054 "traddr": "10.0.0.2", 00:16:31.054 "trsvcid": "4420" 00:16:31.054 }, 00:16:31.054 "peer_address": { 00:16:31.054 "trtype": "TCP", 00:16:31.054 "adrfam": "IPv4", 00:16:31.054 "traddr": "10.0.0.1", 00:16:31.054 "trsvcid": "39940" 00:16:31.054 }, 00:16:31.054 "auth": { 00:16:31.054 "state": "completed", 00:16:31.054 "digest": 
"sha256", 00:16:31.054 "dhgroup": "ffdhe2048" 00:16:31.054 } 00:16:31.054 } 00:16:31.054 ]' 00:16:31.054 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.054 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.054 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.054 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:31.054 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.312 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.312 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.312 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.312 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:16:31.312 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:16:31.878 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.878 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:31.878 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.879 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.879 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.879 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.879 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.879 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.137 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:32.137 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.137 19:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.137 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:32.137 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:32.137 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.137 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:32.137 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.137 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.137 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.137 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:32.137 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.137 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.396 00:16:32.396 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.396 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.396 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.655 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.655 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.655 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.655 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.655 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.655 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.655 { 00:16:32.655 "cntlid": 15, 00:16:32.655 "qid": 0, 00:16:32.655 "state": "enabled", 00:16:32.655 "thread": "nvmf_tgt_poll_group_000", 00:16:32.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:32.655 "listen_address": { 00:16:32.655 "trtype": "TCP", 00:16:32.655 "adrfam": "IPv4", 00:16:32.655 "traddr": "10.0.0.2", 00:16:32.655 "trsvcid": "4420" 00:16:32.655 }, 00:16:32.655 "peer_address": { 00:16:32.655 "trtype": "TCP", 00:16:32.655 "adrfam": "IPv4", 00:16:32.655 "traddr": "10.0.0.1", 00:16:32.655 
"trsvcid": "39968" 00:16:32.655 }, 00:16:32.655 "auth": { 00:16:32.655 "state": "completed", 00:16:32.655 "digest": "sha256", 00:16:32.655 "dhgroup": "ffdhe2048" 00:16:32.655 } 00:16:32.655 } 00:16:32.655 ]' 00:16:32.655 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.655 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.655 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.655 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:32.655 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.655 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.655 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.655 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.914 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:16:32.914 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:16:33.482 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.482 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:33.482 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.482 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.482 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.482 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.482 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.482 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:33.482 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:33.741 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:33.741 19:17:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.741 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.741 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:33.741 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:33.741 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.741 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.741 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.741 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.741 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.741 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.741 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.741 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.999 00:16:33.999 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.999 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.999 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.258 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.258 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.258 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.258 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.258 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.258 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.258 { 00:16:34.258 "cntlid": 17, 00:16:34.258 "qid": 0, 00:16:34.258 "state": "enabled", 00:16:34.258 "thread": "nvmf_tgt_poll_group_000", 00:16:34.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:34.258 "listen_address": { 00:16:34.258 "trtype": "TCP", 00:16:34.258 "adrfam": "IPv4", 
00:16:34.258 "traddr": "10.0.0.2", 00:16:34.258 "trsvcid": "4420" 00:16:34.258 }, 00:16:34.258 "peer_address": { 00:16:34.258 "trtype": "TCP", 00:16:34.258 "adrfam": "IPv4", 00:16:34.258 "traddr": "10.0.0.1", 00:16:34.258 "trsvcid": "47176" 00:16:34.258 }, 00:16:34.258 "auth": { 00:16:34.258 "state": "completed", 00:16:34.258 "digest": "sha256", 00:16:34.258 "dhgroup": "ffdhe3072" 00:16:34.258 } 00:16:34.259 } 00:16:34.259 ]' 00:16:34.259 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.259 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.259 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.259 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:34.259 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.259 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.259 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.259 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.517 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:16:34.517 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:16:35.085 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.085 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:35.085 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.085 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.085 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.085 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.085 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.085 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.343 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:35.343 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.343 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.343 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:35.343 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:35.343 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.343 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.343 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.343 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.343 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.343 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.343 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.343 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.602 00:16:35.602 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.602 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.602 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.860 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.860 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.860 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.860 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.860 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.860 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.860 { 
00:16:35.860 "cntlid": 19, 00:16:35.860 "qid": 0, 00:16:35.860 "state": "enabled", 00:16:35.860 "thread": "nvmf_tgt_poll_group_000", 00:16:35.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:35.860 "listen_address": { 00:16:35.860 "trtype": "TCP", 00:16:35.860 "adrfam": "IPv4", 00:16:35.860 "traddr": "10.0.0.2", 00:16:35.860 "trsvcid": "4420" 00:16:35.860 }, 00:16:35.860 "peer_address": { 00:16:35.860 "trtype": "TCP", 00:16:35.860 "adrfam": "IPv4", 00:16:35.860 "traddr": "10.0.0.1", 00:16:35.860 "trsvcid": "47202" 00:16:35.860 }, 00:16:35.860 "auth": { 00:16:35.860 "state": "completed", 00:16:35.860 "digest": "sha256", 00:16:35.860 "dhgroup": "ffdhe3072" 00:16:35.860 } 00:16:35.860 } 00:16:35.860 ]' 00:16:35.860 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.860 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.860 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.860 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:35.860 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.860 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.860 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.860 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.119 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:16:36.119 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:16:36.686 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.686 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:36.686 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.686 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.686 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.686 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.686 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:36.686 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:36.945 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:36.945 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.945 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.945 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:36.945 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.945 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.945 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.945 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.945 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.946 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.946 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.946 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.946 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.205 00:16:37.205 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.205 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.205 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.464 19:18:00 
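Two SPDK applications are in play throughout this trace: the nvmf target, reached through the suite's rpc_cmd helper, and a separate host-side app listening on /var/tmp/host.sock, reached through hostrpc, which the target/auth.sh@31 lines show expanding to scripts/rpc.py -s /var/tmp/host.sock. A hedged sketch of those wrappers as they appear to behave here (rpc_cmd's actual socket is not shown in this excerpt, so rpc.py's default socket is assumed):

    # as seen at target/auth.sh@31: hostrpc forwards to the host app's RPC socket
    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }

    # rpc_cmd (defined in autotest_common.sh) talks to the nvmf_tgt side; the
    # default rpc.py socket is an assumption, not taken from this log
    rpc_cmd() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"
    }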
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.464 { 00:16:37.464 "cntlid": 21, 00:16:37.464 "qid": 0, 00:16:37.464 "state": "enabled", 00:16:37.464 "thread": "nvmf_tgt_poll_group_000", 00:16:37.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:37.464 "listen_address": { 00:16:37.464 "trtype": "TCP", 00:16:37.464 "adrfam": "IPv4", 00:16:37.464 "traddr": "10.0.0.2", 00:16:37.464 "trsvcid": "4420" 00:16:37.464 }, 00:16:37.464 "peer_address": { 00:16:37.464 "trtype": "TCP", 00:16:37.464 "adrfam": "IPv4", 00:16:37.464 "traddr": "10.0.0.1", 00:16:37.464 "trsvcid": "47240" 00:16:37.464 }, 00:16:37.464 "auth": { 00:16:37.464 "state": "completed", 00:16:37.464 "digest": "sha256", 00:16:37.464 "dhgroup": "ffdhe3072" 00:16:37.464 } 00:16:37.464 } 00:16:37.464 ]' 00:16:37.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.464 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.723 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.723 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.723 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.723 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:16:37.723 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.658 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.915 00:16:38.915 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.915 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.915 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.174 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.174 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.174 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.174 19:18:02 
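The auth.sh line markers repeated through the trace (@119 for dhgroup, @120 for keyid, @121 bdev_nvme_set_options, @123 connect_authenticate) make the overall shape of the test visible: an outer loop over the allowed DH groups (null, ffdhe2048, ffdhe3072, ...) and an inner loop over the configured key ids, each pass re-applying the host options and re-running the authenticate/verify/disconnect cycle. Reconstructed as a sketch only, assuming dhgroups and keys arrays set up earlier in the script, the driver presumably looks something like:

    # sketch of the driver loop implied by target/auth.sh@119-123 (not the literal script)
    digest=sha256
    for dhgroup in "${dhgroups[@]}"; do        # null, ffdhe2048, ffdhe3072, ... in this trace
        for keyid in "${!keys[@]}"; do         # key0..key3 as registered earlier in the test
            hostrpc bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done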
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.174 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.174 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.174 { 00:16:39.174 "cntlid": 23, 00:16:39.174 "qid": 0, 00:16:39.174 "state": "enabled", 00:16:39.174 "thread": "nvmf_tgt_poll_group_000", 00:16:39.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:39.174 "listen_address": { 00:16:39.174 "trtype": "TCP", 00:16:39.174 "adrfam": "IPv4", 00:16:39.174 "traddr": "10.0.0.2", 00:16:39.174 "trsvcid": "4420" 00:16:39.174 }, 00:16:39.174 "peer_address": { 00:16:39.174 "trtype": "TCP", 00:16:39.174 "adrfam": "IPv4", 00:16:39.174 "traddr": "10.0.0.1", 00:16:39.174 "trsvcid": "47280" 00:16:39.174 }, 00:16:39.174 "auth": { 00:16:39.174 "state": "completed", 00:16:39.174 "digest": "sha256", 00:16:39.174 "dhgroup": "ffdhe3072" 00:16:39.174 } 00:16:39.174 } 00:16:39.174 ]' 00:16:39.174 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.174 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.174 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.174 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.174 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.174 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.174 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.174 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.433 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:16:39.433 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:16:39.999 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.999 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:39.999 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.999 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.999 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:39.999 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.999 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.999 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:39.999 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.258 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:40.258 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.258 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.258 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:40.258 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:40.258 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.258 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.258 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.258 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.258 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.258 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.258 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.258 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.517 00:16:40.517 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.517 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.517 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.775 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.775 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.775 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.775 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.775 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.775 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.775 { 00:16:40.775 "cntlid": 25, 00:16:40.775 "qid": 0, 00:16:40.775 "state": "enabled", 00:16:40.775 "thread": "nvmf_tgt_poll_group_000", 00:16:40.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:40.775 "listen_address": { 00:16:40.775 "trtype": "TCP", 00:16:40.775 "adrfam": "IPv4", 00:16:40.775 "traddr": "10.0.0.2", 00:16:40.775 "trsvcid": "4420" 00:16:40.775 }, 00:16:40.775 "peer_address": { 00:16:40.775 "trtype": "TCP", 00:16:40.775 "adrfam": "IPv4", 00:16:40.775 "traddr": "10.0.0.1", 00:16:40.775 "trsvcid": "47316" 00:16:40.775 }, 00:16:40.775 "auth": { 00:16:40.775 "state": "completed", 00:16:40.775 "digest": "sha256", 00:16:40.775 "dhgroup": "ffdhe4096" 00:16:40.775 } 00:16:40.775 } 00:16:40.775 ]' 00:16:40.775 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.775 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.775 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.775 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.775 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.034 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.034 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.034 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.034 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:16:41.034 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:16:41.601 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.601 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:41.601 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.601 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.601 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.601 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.601 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:41.601 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:41.860 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:41.860 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.860 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.860 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:41.860 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.860 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.860 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.860 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.860 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.860 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.860 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.860 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.860 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.118 00:16:42.118 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.118 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.118 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.377 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.377 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.377 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.377 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.377 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.377 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.377 { 00:16:42.377 "cntlid": 27, 00:16:42.377 "qid": 0, 00:16:42.377 "state": "enabled", 00:16:42.377 "thread": "nvmf_tgt_poll_group_000", 00:16:42.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:42.377 "listen_address": { 00:16:42.377 "trtype": "TCP", 00:16:42.377 "adrfam": "IPv4", 00:16:42.377 "traddr": "10.0.0.2", 00:16:42.377 "trsvcid": "4420" 00:16:42.377 }, 00:16:42.377 "peer_address": { 00:16:42.377 "trtype": "TCP", 00:16:42.377 "adrfam": "IPv4", 00:16:42.377 "traddr": "10.0.0.1", 00:16:42.377 "trsvcid": "47336" 00:16:42.377 }, 00:16:42.377 "auth": { 00:16:42.377 "state": "completed", 00:16:42.377 "digest": "sha256", 00:16:42.377 "dhgroup": "ffdhe4096" 00:16:42.377 } 00:16:42.377 } 00:16:42.377 ]' 00:16:42.377 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.377 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.377 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.377 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.377 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.377 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.377 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.377 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.635 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:16:42.635 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:16:43.202 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:43.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.202 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:43.202 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.202 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.202 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.202 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.202 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.202 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.461 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:43.461 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.461 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.461 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:43.461 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.461 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.461 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.461 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.461 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.461 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.461 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.461 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.461 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.720 00:16:43.720 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
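
Annotation (not part of the captured log): the entries around this point repeat one DH-HMAC-CHAP round per digest/dhgroup/key combination, here sha256 with ffdhe4096 and key2. A minimal bash sketch of that round's RPC sequence, reconstructed from the logged commands, follows; the rpc.py path, sockets, NQNs and IP/port are copied from the log, key2/ckey2 refer to keys registered earlier in the test (not shown in this excerpt), and the sketch is illustrative rather than the target/auth.sh script itself.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

# Host side (bdev_nvme initiator, RPC socket /var/tmp/host.sock): pin one digest/dhgroup.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Target side (default RPC socket): allow the host NQN with a DH-HMAC-CHAP key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach a controller over TCP, authenticating with the same key pair.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Target side: the new qpair should report the negotiated digest/dhgroup and completed auth.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
jq -r '.[0].auth.state'   <<<"$qpairs"   # expect "completed"
jq -r '.[0].auth.digest'  <<<"$qpairs"   # expect "sha256"
jq -r '.[0].auth.dhgroup' <<<"$qpairs"   # expect "ffdhe4096"

# Host side: tear down before the next key/dhgroup combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
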
00:16:43.720 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.720 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.979 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.979 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.979 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.979 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.979 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.979 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.979 { 00:16:43.979 "cntlid": 29, 00:16:43.979 "qid": 0, 00:16:43.979 "state": "enabled", 00:16:43.979 "thread": "nvmf_tgt_poll_group_000", 00:16:43.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:43.979 "listen_address": { 00:16:43.979 "trtype": "TCP", 00:16:43.979 "adrfam": "IPv4", 00:16:43.979 "traddr": "10.0.0.2", 00:16:43.979 "trsvcid": "4420" 00:16:43.979 }, 00:16:43.979 "peer_address": { 00:16:43.979 "trtype": "TCP", 00:16:43.979 "adrfam": "IPv4", 00:16:43.979 "traddr": "10.0.0.1", 00:16:43.979 "trsvcid": "52816" 00:16:43.979 }, 00:16:43.979 "auth": { 00:16:43.979 "state": "completed", 00:16:43.979 "digest": "sha256", 00:16:43.979 "dhgroup": "ffdhe4096" 00:16:43.979 } 00:16:43.979 } 00:16:43.979 ]' 00:16:43.979 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.979 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.979 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.979 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.979 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.238 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.238 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.238 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.238 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:16:44.238 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: 
--dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:16:44.805 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.805 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:44.805 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.805 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.805 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.805 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.805 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.805 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:45.064 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:45.064 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.064 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.064 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:45.064 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.064 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.064 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:45.064 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.064 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.064 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.064 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:45.064 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.064 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.323 00:16:45.323 19:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.323 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.323 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.582 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.582 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.582 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.582 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.582 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.582 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.582 { 00:16:45.582 "cntlid": 31, 00:16:45.582 "qid": 0, 00:16:45.582 "state": "enabled", 00:16:45.582 "thread": "nvmf_tgt_poll_group_000", 00:16:45.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:45.582 "listen_address": { 00:16:45.582 "trtype": "TCP", 00:16:45.582 "adrfam": "IPv4", 00:16:45.582 "traddr": "10.0.0.2", 00:16:45.582 "trsvcid": "4420" 00:16:45.582 }, 00:16:45.582 "peer_address": { 00:16:45.582 "trtype": "TCP", 00:16:45.582 "adrfam": "IPv4", 00:16:45.582 "traddr": "10.0.0.1", 00:16:45.582 "trsvcid": "52848" 00:16:45.582 }, 00:16:45.582 "auth": { 00:16:45.582 "state": "completed", 00:16:45.582 "digest": "sha256", 00:16:45.582 "dhgroup": "ffdhe4096" 00:16:45.582 } 00:16:45.582 } 00:16:45.582 ]' 00:16:45.582 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.582 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.582 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.841 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.841 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.841 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.841 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.841 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.841 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:16:45.841 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.776 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.035 00:16:47.035 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.035 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.035 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.294 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.294 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.294 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.294 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.294 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.294 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.294 { 00:16:47.294 "cntlid": 33, 00:16:47.294 "qid": 0, 00:16:47.294 "state": "enabled", 00:16:47.294 "thread": "nvmf_tgt_poll_group_000", 00:16:47.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:47.294 "listen_address": { 00:16:47.294 "trtype": "TCP", 00:16:47.294 "adrfam": "IPv4", 00:16:47.294 "traddr": "10.0.0.2", 00:16:47.294 "trsvcid": "4420" 00:16:47.294 }, 00:16:47.294 "peer_address": { 00:16:47.294 "trtype": "TCP", 00:16:47.294 "adrfam": "IPv4", 00:16:47.294 "traddr": "10.0.0.1", 00:16:47.294 "trsvcid": "52868" 00:16:47.294 }, 00:16:47.294 "auth": { 00:16:47.294 "state": "completed", 00:16:47.294 "digest": "sha256", 00:16:47.294 "dhgroup": "ffdhe6144" 00:16:47.294 } 00:16:47.294 } 00:16:47.294 ]' 00:16:47.294 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.294 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.294 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.554 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:47.554 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.554 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.554 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.554 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.814 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret 
DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:16:47.814 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:16:48.381 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.381 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:48.381 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.381 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.381 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.381 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.381 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.381 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.639 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:48.639 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.639 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.639 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:48.639 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:48.639 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.639 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.640 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.640 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.640 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.640 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.640 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.640 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.898 00:16:48.898 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.898 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.898 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.156 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.156 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.156 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.156 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.156 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.156 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.156 { 00:16:49.156 "cntlid": 35, 00:16:49.156 "qid": 0, 00:16:49.156 "state": "enabled", 00:16:49.156 "thread": "nvmf_tgt_poll_group_000", 00:16:49.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:49.156 "listen_address": { 00:16:49.156 "trtype": "TCP", 00:16:49.156 "adrfam": "IPv4", 00:16:49.156 "traddr": "10.0.0.2", 00:16:49.156 "trsvcid": "4420" 00:16:49.156 }, 00:16:49.156 "peer_address": { 00:16:49.156 "trtype": "TCP", 00:16:49.156 "adrfam": "IPv4", 00:16:49.156 "traddr": "10.0.0.1", 00:16:49.156 "trsvcid": "52894" 00:16:49.156 }, 00:16:49.156 "auth": { 00:16:49.156 "state": "completed", 00:16:49.156 "digest": "sha256", 00:16:49.156 "dhgroup": "ffdhe6144" 00:16:49.156 } 00:16:49.156 } 00:16:49.156 ]' 00:16:49.156 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.156 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.156 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.156 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.156 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.156 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.156 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.156 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.413 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:16:49.413 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:16:49.975 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.975 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:49.975 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.975 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.975 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.975 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.975 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.975 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.233 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:50.233 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.233 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.233 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:50.233 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:50.233 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.233 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.233 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.233 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.233 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.233 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.233 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.233 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.491 00:16:50.491 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.491 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.491 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.750 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.750 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.750 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.750 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.750 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.750 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.750 { 00:16:50.750 "cntlid": 37, 00:16:50.750 "qid": 0, 00:16:50.750 "state": "enabled", 00:16:50.750 "thread": "nvmf_tgt_poll_group_000", 00:16:50.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:50.750 "listen_address": { 00:16:50.750 "trtype": "TCP", 00:16:50.750 "adrfam": "IPv4", 00:16:50.750 "traddr": "10.0.0.2", 00:16:50.750 "trsvcid": "4420" 00:16:50.750 }, 00:16:50.750 "peer_address": { 00:16:50.750 "trtype": "TCP", 00:16:50.750 "adrfam": "IPv4", 00:16:50.750 "traddr": "10.0.0.1", 00:16:50.750 "trsvcid": "52922" 00:16:50.750 }, 00:16:50.750 "auth": { 00:16:50.750 "state": "completed", 00:16:50.750 "digest": "sha256", 00:16:50.750 "dhgroup": "ffdhe6144" 00:16:50.750 } 00:16:50.750 } 00:16:50.750 ]' 00:16:50.750 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.750 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.750 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.750 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:50.750 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.009 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.009 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:51.009 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.009 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:16:51.009 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:16:51.577 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.577 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:51.577 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.577 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.577 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.577 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.577 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:51.577 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:51.879 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:51.879 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.879 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:51.879 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:51.879 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:51.879 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.879 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:51.879 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.879 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.879 19:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.879 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:51.879 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.879 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.185 00:16:52.185 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.185 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.185 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.503 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.503 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.503 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.503 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.503 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.503 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.503 { 00:16:52.503 "cntlid": 39, 00:16:52.503 "qid": 0, 00:16:52.503 "state": "enabled", 00:16:52.503 "thread": "nvmf_tgt_poll_group_000", 00:16:52.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:52.503 "listen_address": { 00:16:52.503 "trtype": "TCP", 00:16:52.503 "adrfam": "IPv4", 00:16:52.503 "traddr": "10.0.0.2", 00:16:52.503 "trsvcid": "4420" 00:16:52.503 }, 00:16:52.503 "peer_address": { 00:16:52.503 "trtype": "TCP", 00:16:52.503 "adrfam": "IPv4", 00:16:52.503 "traddr": "10.0.0.1", 00:16:52.503 "trsvcid": "52942" 00:16:52.503 }, 00:16:52.503 "auth": { 00:16:52.503 "state": "completed", 00:16:52.503 "digest": "sha256", 00:16:52.503 "dhgroup": "ffdhe6144" 00:16:52.503 } 00:16:52.503 } 00:16:52.503 ]' 00:16:52.503 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.503 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.503 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.503 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.503 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.503 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:52.503 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.503 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.779 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:16:52.779 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:16:53.355 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.355 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:53.355 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.355 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.355 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.355 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.355 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.355 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:53.355 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:53.614 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:53.614 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.614 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.614 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:53.614 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:53.614 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.614 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.614 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
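The span above completes one connect_authenticate cycle on the host side: set the allowed DH-HMAC-CHAP digest and dhgroup, attach a bdev controller with the key pair, confirm the controller name, and detach again. A condensed sketch of that sequence follows; SPDK_DIR and HOSTNQN are placeholders for the workspace path and host NQN UUID seen in this run, and key0/ckey0 are assumed to be keyring names registered earlier in target/auth.sh.

hostrpc() { "${SPDK_DIR}/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }  # mirrors the expansion at auth.sh@31 above
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
hostrpc bdev_nvme_detach_controller nvme0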
00:16:53.614 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.614 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.614 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.614 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.614 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.181 00:16:54.181 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.181 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.181 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.181 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.181 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.181 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.181 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.181 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.181 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.181 { 00:16:54.181 "cntlid": 41, 00:16:54.181 "qid": 0, 00:16:54.181 "state": "enabled", 00:16:54.181 "thread": "nvmf_tgt_poll_group_000", 00:16:54.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:54.181 "listen_address": { 00:16:54.181 "trtype": "TCP", 00:16:54.181 "adrfam": "IPv4", 00:16:54.181 "traddr": "10.0.0.2", 00:16:54.181 "trsvcid": "4420" 00:16:54.181 }, 00:16:54.181 "peer_address": { 00:16:54.181 "trtype": "TCP", 00:16:54.181 "adrfam": "IPv4", 00:16:54.181 "traddr": "10.0.0.1", 00:16:54.181 "trsvcid": "57730" 00:16:54.181 }, 00:16:54.181 "auth": { 00:16:54.181 "state": "completed", 00:16:54.181 "digest": "sha256", 00:16:54.181 "dhgroup": "ffdhe8192" 00:16:54.181 } 00:16:54.181 } 00:16:54.181 ]' 00:16:54.181 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.440 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.440 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.440 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.440 19:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.440 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.440 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.440 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.727 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:16:54.727 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.295 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.863 00:16:55.863 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.863 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.863 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.121 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.121 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.121 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.121 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.121 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.121 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.121 { 00:16:56.121 "cntlid": 43, 00:16:56.121 "qid": 0, 00:16:56.121 "state": "enabled", 00:16:56.121 "thread": "nvmf_tgt_poll_group_000", 00:16:56.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:56.121 "listen_address": { 00:16:56.121 "trtype": "TCP", 00:16:56.121 "adrfam": "IPv4", 00:16:56.121 "traddr": "10.0.0.2", 00:16:56.121 "trsvcid": "4420" 00:16:56.121 }, 00:16:56.121 "peer_address": { 00:16:56.121 "trtype": "TCP", 00:16:56.121 "adrfam": "IPv4", 00:16:56.121 "traddr": "10.0.0.1", 00:16:56.121 "trsvcid": "57752" 00:16:56.121 }, 00:16:56.121 "auth": { 00:16:56.121 "state": "completed", 00:16:56.121 "digest": "sha256", 00:16:56.121 "dhgroup": "ffdhe8192" 00:16:56.121 } 00:16:56.121 } 00:16:56.121 ]' 00:16:56.121 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.121 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:56.121 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.121 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.121 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.122 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.122 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.122 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.379 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:16:56.379 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:16:56.945 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.945 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:56.945 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.945 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.945 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.945 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.945 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.945 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:57.203 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:57.203 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.203 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:57.203 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:57.203 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:57.203 19:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.203 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.203 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.203 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.203 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.203 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.203 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.203 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.771 00:16:57.771 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.771 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.771 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.771 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.771 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.771 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.771 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.771 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.771 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.771 { 00:16:57.771 "cntlid": 45, 00:16:57.771 "qid": 0, 00:16:57.771 "state": "enabled", 00:16:57.771 "thread": "nvmf_tgt_poll_group_000", 00:16:57.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:57.771 "listen_address": { 00:16:57.771 "trtype": "TCP", 00:16:57.771 "adrfam": "IPv4", 00:16:57.771 "traddr": "10.0.0.2", 00:16:57.771 "trsvcid": "4420" 00:16:57.771 }, 00:16:57.771 "peer_address": { 00:16:57.771 "trtype": "TCP", 00:16:57.771 "adrfam": "IPv4", 00:16:57.771 "traddr": "10.0.0.1", 00:16:57.771 "trsvcid": "57778" 00:16:57.771 }, 00:16:57.771 "auth": { 00:16:57.771 "state": "completed", 00:16:57.771 "digest": "sha256", 00:16:57.771 "dhgroup": "ffdhe8192" 00:16:57.771 } 00:16:57.771 } 00:16:57.771 ]' 00:16:58.030 
19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.030 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.030 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.030 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.030 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.030 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.030 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.030 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.288 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:16:58.288 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:16:58.855 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.855 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:58.855 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.855 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.855 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.855 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.855 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.855 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:59.114 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:59.114 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.114 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:59.114 19:18:21 
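On the target side the same cycle authorizes the host NQN with the matching key pair and then reads back the negotiated authentication parameters from the qpair listing, as in the jq checks above. A minimal sketch, assuming rpc_tgt as a placeholder for the target-side rpc.py invocation behind rpc_cmd in this trace:

rpc_tgt() { "${SPDK_DIR}/scripts/rpc.py" "$@"; }  # placeholder; assumes the target's default RPC socket
rpc_tgt nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
  --dhchap-key key2 --dhchap-ctrlr-key ckey2
# host attaches here (see the bdev_nvme_attach_controller sketch above)
qpairs=$(rpc_tgt nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect: sha256
jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect: ffdhe8192
jq -r '.[0].auth.state'   <<< "$qpairs"   # expect: completed
rpc_tgt nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"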
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:59.114 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:59.114 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.114 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:59.115 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.115 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.115 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.115 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:59.115 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.115 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.373 00:16:59.373 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.373 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.373 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.632 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.632 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.632 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.632 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.632 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.632 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.632 { 00:16:59.632 "cntlid": 47, 00:16:59.632 "qid": 0, 00:16:59.632 "state": "enabled", 00:16:59.632 "thread": "nvmf_tgt_poll_group_000", 00:16:59.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:59.632 "listen_address": { 00:16:59.632 "trtype": "TCP", 00:16:59.632 "adrfam": "IPv4", 00:16:59.632 "traddr": "10.0.0.2", 00:16:59.632 "trsvcid": "4420" 00:16:59.632 }, 00:16:59.632 "peer_address": { 00:16:59.632 "trtype": "TCP", 00:16:59.632 "adrfam": "IPv4", 00:16:59.632 "traddr": "10.0.0.1", 00:16:59.632 "trsvcid": "57810" 00:16:59.632 }, 00:16:59.632 "auth": { 00:16:59.632 "state": "completed", 00:16:59.632 
"digest": "sha256", 00:16:59.632 "dhgroup": "ffdhe8192" 00:16:59.632 } 00:16:59.632 } 00:16:59.632 ]' 00:16:59.632 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.632 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.632 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.891 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.891 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.891 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.891 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.891 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.891 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:16:59.891 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:17:00.461 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.461 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:00.461 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.461 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.461 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.461 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:00.461 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.461 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.461 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:00.461 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:00.719 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:00.719 19:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.719 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.719 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:00.719 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:00.719 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.719 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.719 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.719 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.719 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.719 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.719 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.719 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.976 00:17:00.976 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.976 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.976 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.234 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.234 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.234 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.234 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.234 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.234 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.234 { 00:17:01.234 "cntlid": 49, 00:17:01.234 "qid": 0, 00:17:01.234 "state": "enabled", 00:17:01.234 "thread": "nvmf_tgt_poll_group_000", 00:17:01.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:01.234 "listen_address": { 00:17:01.234 "trtype": "TCP", 00:17:01.234 "adrfam": "IPv4", 
00:17:01.234 "traddr": "10.0.0.2", 00:17:01.234 "trsvcid": "4420" 00:17:01.234 }, 00:17:01.234 "peer_address": { 00:17:01.234 "trtype": "TCP", 00:17:01.234 "adrfam": "IPv4", 00:17:01.234 "traddr": "10.0.0.1", 00:17:01.234 "trsvcid": "57840" 00:17:01.234 }, 00:17:01.234 "auth": { 00:17:01.234 "state": "completed", 00:17:01.234 "digest": "sha384", 00:17:01.234 "dhgroup": "null" 00:17:01.234 } 00:17:01.234 } 00:17:01.234 ]' 00:17:01.234 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.234 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.234 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.234 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:01.234 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.493 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.493 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.493 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.493 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:17:01.493 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:17:02.061 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.061 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:02.061 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.061 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.061 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.061 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.061 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.061 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.319 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:02.320 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.320 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.320 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:02.320 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:02.320 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.320 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.320 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.320 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.320 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.320 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.320 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.320 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.578 00:17:02.578 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.578 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.579 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.837 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.837 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.837 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.837 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.837 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.837 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.837 { 00:17:02.837 "cntlid": 51, 00:17:02.837 "qid": 0, 00:17:02.837 "state": "enabled", 
00:17:02.837 "thread": "nvmf_tgt_poll_group_000", 00:17:02.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:02.837 "listen_address": { 00:17:02.837 "trtype": "TCP", 00:17:02.837 "adrfam": "IPv4", 00:17:02.837 "traddr": "10.0.0.2", 00:17:02.837 "trsvcid": "4420" 00:17:02.837 }, 00:17:02.837 "peer_address": { 00:17:02.837 "trtype": "TCP", 00:17:02.837 "adrfam": "IPv4", 00:17:02.837 "traddr": "10.0.0.1", 00:17:02.837 "trsvcid": "57876" 00:17:02.837 }, 00:17:02.837 "auth": { 00:17:02.837 "state": "completed", 00:17:02.837 "digest": "sha384", 00:17:02.837 "dhgroup": "null" 00:17:02.837 } 00:17:02.837 } 00:17:02.837 ]' 00:17:02.837 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.837 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.837 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.837 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:02.837 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.095 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:17:03.095 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:17:03.663 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.663 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:03.663 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.663 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.922 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.922 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.922 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
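Each cycle also logs in with the kernel initiator through nvme-cli, passing the DH-HMAC-CHAP secrets directly on the command line, as in the nvme_connect/disconnect lines above. A hedged sketch of that path; HOSTNQN, HOSTID and the two secret variables stand in for the UUID and DHHC-1 strings shown in this run:

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
  --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: "disconnected 1 controller(s)"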
00:17:03.922 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:03.922 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:03.922 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.922 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.922 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:03.922 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:03.922 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.922 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.922 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.922 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.922 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.922 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.922 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.923 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.181 00:17:04.181 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.181 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.181 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.439 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.439 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.439 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.439 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.439 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.439 19:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.439 { 00:17:04.439 "cntlid": 53, 00:17:04.439 "qid": 0, 00:17:04.439 "state": "enabled", 00:17:04.439 "thread": "nvmf_tgt_poll_group_000", 00:17:04.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:04.439 "listen_address": { 00:17:04.439 "trtype": "TCP", 00:17:04.439 "adrfam": "IPv4", 00:17:04.439 "traddr": "10.0.0.2", 00:17:04.439 "trsvcid": "4420" 00:17:04.439 }, 00:17:04.439 "peer_address": { 00:17:04.439 "trtype": "TCP", 00:17:04.439 "adrfam": "IPv4", 00:17:04.439 "traddr": "10.0.0.1", 00:17:04.439 "trsvcid": "50342" 00:17:04.439 }, 00:17:04.439 "auth": { 00:17:04.440 "state": "completed", 00:17:04.440 "digest": "sha384", 00:17:04.440 "dhgroup": "null" 00:17:04.440 } 00:17:04.440 } 00:17:04.440 ]' 00:17:04.440 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.440 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.440 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.440 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:04.440 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.698 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.698 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.698 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.698 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:17:04.698 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:17:05.264 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.265 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:05.265 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.265 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.265 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.265 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:05.265 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.265 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.524 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:05.524 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.524 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.524 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:05.524 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:05.524 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.524 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:05.524 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.524 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.524 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.524 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:05.524 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.524 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.783 00:17:05.783 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.783 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.783 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.048 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.048 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.048 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.048 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.048 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.048 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.048 { 00:17:06.048 "cntlid": 55, 00:17:06.048 "qid": 0, 00:17:06.048 "state": "enabled", 00:17:06.048 "thread": "nvmf_tgt_poll_group_000", 00:17:06.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:06.048 "listen_address": { 00:17:06.048 "trtype": "TCP", 00:17:06.048 "adrfam": "IPv4", 00:17:06.048 "traddr": "10.0.0.2", 00:17:06.048 "trsvcid": "4420" 00:17:06.048 }, 00:17:06.048 "peer_address": { 00:17:06.048 "trtype": "TCP", 00:17:06.048 "adrfam": "IPv4", 00:17:06.048 "traddr": "10.0.0.1", 00:17:06.048 "trsvcid": "50366" 00:17:06.048 }, 00:17:06.048 "auth": { 00:17:06.048 "state": "completed", 00:17:06.048 "digest": "sha384", 00:17:06.048 "dhgroup": "null" 00:17:06.048 } 00:17:06.048 } 00:17:06.048 ]' 00:17:06.048 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.048 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.048 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.048 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:06.048 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.310 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.310 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.311 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.311 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:17:06.311 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:17:06.878 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.878 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:06.878 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.878 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.878 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.878 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.878 19:18:29 
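The whole trace is driven by the nested loops visible at target/auth.sh@118-121: every digest is paired with every dhgroup and every key index. Reconstructed as a sketch, with the function names taken from the trace and the digests/dhgroups/keys arrays assumed to be defined earlier in the script:

for digest in "${digests[@]}"; do                                 # auth.sh@118
  for dhgroup in "${dhgroups[@]}"; do                             # auth.sh@119
    for keyid in "${!keys[@]}"; do                                # auth.sh@120
      hostrpc bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"   # auth.sh@121
      connect_authenticate "$digest" "$dhgroup" "$keyid"          # auth.sh@123
    done
  done
done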
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.878 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.878 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.136 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:07.136 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.136 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.136 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:07.136 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:07.136 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.136 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.136 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.136 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.136 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.136 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.136 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.137 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.394 00:17:07.394 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.394 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.394 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.653 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.653 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.653 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:07.653 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.653 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.653 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.653 { 00:17:07.653 "cntlid": 57, 00:17:07.653 "qid": 0, 00:17:07.653 "state": "enabled", 00:17:07.653 "thread": "nvmf_tgt_poll_group_000", 00:17:07.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:07.653 "listen_address": { 00:17:07.653 "trtype": "TCP", 00:17:07.653 "adrfam": "IPv4", 00:17:07.653 "traddr": "10.0.0.2", 00:17:07.653 "trsvcid": "4420" 00:17:07.653 }, 00:17:07.653 "peer_address": { 00:17:07.653 "trtype": "TCP", 00:17:07.653 "adrfam": "IPv4", 00:17:07.653 "traddr": "10.0.0.1", 00:17:07.653 "trsvcid": "50394" 00:17:07.653 }, 00:17:07.653 "auth": { 00:17:07.653 "state": "completed", 00:17:07.653 "digest": "sha384", 00:17:07.653 "dhgroup": "ffdhe2048" 00:17:07.653 } 00:17:07.653 } 00:17:07.653 ]' 00:17:07.653 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.653 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.653 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.653 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:07.653 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.653 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.653 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.653 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.912 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:17:07.912 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:17:08.480 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.480 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:08.480 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.480 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.480 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.480 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.480 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:08.480 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:08.738 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:08.738 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.738 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.738 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:08.738 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:08.738 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.738 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.738 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.738 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.738 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.738 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.738 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.739 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.997 00:17:08.997 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.997 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.997 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.256 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.256 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.256 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.256 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.256 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.256 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.256 { 00:17:09.256 "cntlid": 59, 00:17:09.256 "qid": 0, 00:17:09.256 "state": "enabled", 00:17:09.256 "thread": "nvmf_tgt_poll_group_000", 00:17:09.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:09.256 "listen_address": { 00:17:09.256 "trtype": "TCP", 00:17:09.256 "adrfam": "IPv4", 00:17:09.256 "traddr": "10.0.0.2", 00:17:09.256 "trsvcid": "4420" 00:17:09.256 }, 00:17:09.256 "peer_address": { 00:17:09.256 "trtype": "TCP", 00:17:09.256 "adrfam": "IPv4", 00:17:09.256 "traddr": "10.0.0.1", 00:17:09.256 "trsvcid": "50432" 00:17:09.256 }, 00:17:09.256 "auth": { 00:17:09.256 "state": "completed", 00:17:09.256 "digest": "sha384", 00:17:09.256 "dhgroup": "ffdhe2048" 00:17:09.256 } 00:17:09.256 } 00:17:09.256 ]' 00:17:09.256 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.256 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.256 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.256 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.256 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.256 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.256 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.256 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.514 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:17:09.514 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:17:10.082 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.082 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:10.082 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.082 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.082 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.082 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.082 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:10.082 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:10.341 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:10.341 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.341 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.341 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:10.341 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:10.341 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.341 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.341 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.341 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.341 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.341 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.342 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.342 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.600 00:17:10.600 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.600 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.600 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.857 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.857 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.857 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.857 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.857 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.857 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.857 { 00:17:10.857 "cntlid": 61, 00:17:10.857 "qid": 0, 00:17:10.857 "state": "enabled", 00:17:10.857 "thread": "nvmf_tgt_poll_group_000", 00:17:10.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:10.857 "listen_address": { 00:17:10.857 "trtype": "TCP", 00:17:10.857 "adrfam": "IPv4", 00:17:10.857 "traddr": "10.0.0.2", 00:17:10.857 "trsvcid": "4420" 00:17:10.857 }, 00:17:10.857 "peer_address": { 00:17:10.857 "trtype": "TCP", 00:17:10.857 "adrfam": "IPv4", 00:17:10.857 "traddr": "10.0.0.1", 00:17:10.857 "trsvcid": "50456" 00:17:10.858 }, 00:17:10.858 "auth": { 00:17:10.858 "state": "completed", 00:17:10.858 "digest": "sha384", 00:17:10.858 "dhgroup": "ffdhe2048" 00:17:10.858 } 00:17:10.858 } 00:17:10.858 ]' 00:17:10.858 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.858 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.858 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.858 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:10.858 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.858 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.858 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.858 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.115 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:17:11.115 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:17:11.684 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.684 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:11.684 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.684 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.684 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.684 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.684 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.684 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.943 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:11.943 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.943 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.943 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:11.943 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:11.943 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.943 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:11.943 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.943 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.943 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.943 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:11.943 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.943 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.203 00:17:12.203 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.203 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.203 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.203 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.203 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.203 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.203 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.461 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.461 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.461 { 00:17:12.461 "cntlid": 63, 00:17:12.461 "qid": 0, 00:17:12.461 "state": "enabled", 00:17:12.461 "thread": "nvmf_tgt_poll_group_000", 00:17:12.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:12.461 "listen_address": { 00:17:12.461 "trtype": "TCP", 00:17:12.461 "adrfam": "IPv4", 00:17:12.461 "traddr": "10.0.0.2", 00:17:12.461 "trsvcid": "4420" 00:17:12.461 }, 00:17:12.461 "peer_address": { 00:17:12.461 "trtype": "TCP", 00:17:12.461 "adrfam": "IPv4", 00:17:12.461 "traddr": "10.0.0.1", 00:17:12.461 "trsvcid": "50484" 00:17:12.461 }, 00:17:12.461 "auth": { 00:17:12.461 "state": "completed", 00:17:12.461 "digest": "sha384", 00:17:12.461 "dhgroup": "ffdhe2048" 00:17:12.461 } 00:17:12.461 } 00:17:12.461 ]' 00:17:12.461 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.461 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.461 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.461 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:12.461 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.461 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.461 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.461 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.719 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:17:12.719 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:17:13.286 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:13.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.286 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:13.286 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.286 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.286 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.286 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.286 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.286 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:13.286 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:13.545 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:13.545 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.545 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.545 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:13.545 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:13.545 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.545 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.545 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.545 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.545 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.545 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.545 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.545 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.804 
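The trace repeats one connect_authenticate round per digest/dhgroup/key combination. Reconstructed only from commands that appear verbatim in this log, a single round looks roughly like the sketch below; it is illustrative rather than the canonical target/auth.sh source, KEY0/CKEY0 stand in for the DHHC-1 secrets printed in the trace, and the socket used by the target-side rpc_cmd helper is not shown in the log, so the sketch assumes the default RPC socket.

  # Illustrative reconstruction of one connect_authenticate round (here: sha384 + ffdhe3072, key0),
  # assembled from the RPCs and nvme-cli calls visible in the trace above.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTSOCK=/var/tmp/host.sock
  HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # 1. Restrict the host-side initiator to the digest/dhgroup under test.
  $RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # 2. Allow the host on the target side with the key pair under test (target-side socket assumed default).
  $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 3. Attach a bdev controller over TCP, authenticating with the same keys.
  $RPC -s $HOSTSOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 4. Confirm the attach and fetch the negotiated auth parameters from the target.
  $RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  $RPC nvmf_subsystem_get_qpairs $SUBNQN                           # .auth fields checked with jq

  # 5. Detach, repeat the handshake through nvme-cli with the DHHC-1 secrets, then clean up.
  $RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN --hostid $HOSTID -l 0 \
      --dhchap-secret "$KEY0" --dhchap-ctrl-secret "$CKEY0"
  nvme disconnect -n $SUBNQN
  $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN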
00:17:13.804 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.804 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.804 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.804 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.804 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.804 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.804 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.804 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.804 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.804 { 00:17:13.804 "cntlid": 65, 00:17:13.804 "qid": 0, 00:17:13.804 "state": "enabled", 00:17:13.804 "thread": "nvmf_tgt_poll_group_000", 00:17:13.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:13.804 "listen_address": { 00:17:13.804 "trtype": "TCP", 00:17:13.804 "adrfam": "IPv4", 00:17:13.804 "traddr": "10.0.0.2", 00:17:13.804 "trsvcid": "4420" 00:17:13.804 }, 00:17:13.804 "peer_address": { 00:17:13.804 "trtype": "TCP", 00:17:13.804 "adrfam": "IPv4", 00:17:13.804 "traddr": "10.0.0.1", 00:17:13.804 "trsvcid": "42656" 00:17:13.804 }, 00:17:13.804 "auth": { 00:17:13.805 "state": "completed", 00:17:13.805 "digest": "sha384", 00:17:13.805 "dhgroup": "ffdhe3072" 00:17:13.805 } 00:17:13.805 } 00:17:13.805 ]' 00:17:13.805 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.063 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.063 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.063 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.063 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.063 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.063 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.063 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.322 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:17:14.322 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:17:14.888 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.888 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:14.888 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.888 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.888 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.888 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.888 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.888 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.888 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:14.888 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.888 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.888 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:14.888 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:14.888 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.888 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.888 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.888 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.146 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.146 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.146 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.146 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.146 00:17:15.406 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.406 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.406 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.406 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.406 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.406 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.406 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.406 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.406 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.406 { 00:17:15.406 "cntlid": 67, 00:17:15.406 "qid": 0, 00:17:15.406 "state": "enabled", 00:17:15.406 "thread": "nvmf_tgt_poll_group_000", 00:17:15.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:15.406 "listen_address": { 00:17:15.406 "trtype": "TCP", 00:17:15.406 "adrfam": "IPv4", 00:17:15.406 "traddr": "10.0.0.2", 00:17:15.406 "trsvcid": "4420" 00:17:15.406 }, 00:17:15.406 "peer_address": { 00:17:15.406 "trtype": "TCP", 00:17:15.406 "adrfam": "IPv4", 00:17:15.406 "traddr": "10.0.0.1", 00:17:15.406 "trsvcid": "42688" 00:17:15.406 }, 00:17:15.406 "auth": { 00:17:15.406 "state": "completed", 00:17:15.406 "digest": "sha384", 00:17:15.406 "dhgroup": "ffdhe3072" 00:17:15.406 } 00:17:15.406 } 00:17:15.406 ]' 00:17:15.406 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.664 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.664 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.664 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.664 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.664 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.664 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.664 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.923 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret 
DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:17:15.923 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.490 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.491 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.749 00:17:16.749 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.749 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.749 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.007 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.007 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.007 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.007 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.007 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.007 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.007 { 00:17:17.007 "cntlid": 69, 00:17:17.007 "qid": 0, 00:17:17.007 "state": "enabled", 00:17:17.007 "thread": "nvmf_tgt_poll_group_000", 00:17:17.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:17.007 "listen_address": { 00:17:17.007 "trtype": "TCP", 00:17:17.007 "adrfam": "IPv4", 00:17:17.007 "traddr": "10.0.0.2", 00:17:17.007 "trsvcid": "4420" 00:17:17.007 }, 00:17:17.007 "peer_address": { 00:17:17.007 "trtype": "TCP", 00:17:17.007 "adrfam": "IPv4", 00:17:17.007 "traddr": "10.0.0.1", 00:17:17.007 "trsvcid": "42712" 00:17:17.007 }, 00:17:17.007 "auth": { 00:17:17.007 "state": "completed", 00:17:17.007 "digest": "sha384", 00:17:17.007 "dhgroup": "ffdhe3072" 00:17:17.007 } 00:17:17.007 } 00:17:17.007 ]' 00:17:17.007 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.007 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.007 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.265 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:17.265 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.265 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.265 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.265 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:17.522 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:17:17.522 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:17:18.089 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.089 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:18.089 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.089 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.089 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.089 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.089 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:18.089 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:18.089 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:18.089 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.089 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.089 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:18.089 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:18.089 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.089 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:18.089 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.089 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.089 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.089 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
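One detail worth noting in these rounds: whenever the key id is 3 (the connect_authenticate ... 3 rounds, including the ffdhe3072 round in progress here), nvmf_subsystem_add_host and bdev_nvme_attach_controller are issued with --dhchap-key key3 only, and the matching nvme connect carries only --dhchap-secret. That follows from the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion visible in the trace: when no controller key is configured for a key id, the expansion collapses to nothing and the round exercises unidirectional (host-to-target) DH-HMAC-CHAP instead of bidirectional authentication. A minimal bash illustration of that pattern, with hypothetical array contents:

  # When the ckeys entry for a key id is empty, the :+ expansion yields no words,
  # so no --dhchap-ctrlr-key argument is appended and authentication is one-way.
  ckeys=("ckey0" "ckey1" "ckey2" "")   # hypothetical: key3 has no controller key
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "extra args for key$keyid: ${ckey[*]:-<none>}"   # prints: extra args for key3: <none>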
00:17:18.089 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.089 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.348 00:17:18.348 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.348 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.348 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.607 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.607 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.607 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.607 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.607 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.607 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.607 { 00:17:18.607 "cntlid": 71, 00:17:18.607 "qid": 0, 00:17:18.607 "state": "enabled", 00:17:18.607 "thread": "nvmf_tgt_poll_group_000", 00:17:18.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:18.607 "listen_address": { 00:17:18.607 "trtype": "TCP", 00:17:18.607 "adrfam": "IPv4", 00:17:18.607 "traddr": "10.0.0.2", 00:17:18.607 "trsvcid": "4420" 00:17:18.607 }, 00:17:18.607 "peer_address": { 00:17:18.607 "trtype": "TCP", 00:17:18.607 "adrfam": "IPv4", 00:17:18.607 "traddr": "10.0.0.1", 00:17:18.607 "trsvcid": "42742" 00:17:18.607 }, 00:17:18.607 "auth": { 00:17:18.607 "state": "completed", 00:17:18.607 "digest": "sha384", 00:17:18.607 "dhgroup": "ffdhe3072" 00:17:18.607 } 00:17:18.607 } 00:17:18.607 ]' 00:17:18.607 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.607 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.607 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.865 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:18.865 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.865 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.865 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.865 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.865 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:17:18.866 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:17:19.434 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.434 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:19.434 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.434 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.692 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.692 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.693 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.693 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.693 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.693 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:19.693 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.693 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.693 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:19.693 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:19.693 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.693 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.693 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.693 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.693 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
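The pass/fail decision for each round reduces to the three jq checks that recur throughout this trace: the first qpair returned by nvmf_subsystem_get_qpairs must report the digest and dhgroup under test and an auth state of "completed". A compact sketch of that check for the ffdhe4096/key0 round that closes this excerpt, using the same jq filters as the trace (qpairs is an illustrative variable; rpc_cmd is the test-harness helper seen above):

  # Verify the negotiated auth parameters on the target side, as the trace does.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]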
00:17:19.693 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.693 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.693 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.951 00:17:19.951 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.951 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.951 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.209 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.209 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.209 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.209 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.209 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.209 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.209 { 00:17:20.209 "cntlid": 73, 00:17:20.209 "qid": 0, 00:17:20.209 "state": "enabled", 00:17:20.209 "thread": "nvmf_tgt_poll_group_000", 00:17:20.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:20.209 "listen_address": { 00:17:20.209 "trtype": "TCP", 00:17:20.209 "adrfam": "IPv4", 00:17:20.209 "traddr": "10.0.0.2", 00:17:20.209 "trsvcid": "4420" 00:17:20.209 }, 00:17:20.209 "peer_address": { 00:17:20.209 "trtype": "TCP", 00:17:20.209 "adrfam": "IPv4", 00:17:20.209 "traddr": "10.0.0.1", 00:17:20.209 "trsvcid": "42760" 00:17:20.209 }, 00:17:20.209 "auth": { 00:17:20.209 "state": "completed", 00:17:20.209 "digest": "sha384", 00:17:20.209 "dhgroup": "ffdhe4096" 00:17:20.209 } 00:17:20.209 } 00:17:20.209 ]' 00:17:20.209 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.209 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.209 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.468 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:20.468 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.468 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.468 
19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.468 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.468 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:17:20.468 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:17:21.033 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.033 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:21.034 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.292 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.292 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.292 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.292 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.292 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.292 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:21.292 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.292 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.292 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:21.292 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:21.292 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.292 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.292 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.292 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.292 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.292 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.292 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.292 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.551 00:17:21.551 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.551 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.551 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.809 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.809 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.809 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.809 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.809 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.809 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.809 { 00:17:21.809 "cntlid": 75, 00:17:21.809 "qid": 0, 00:17:21.809 "state": "enabled", 00:17:21.809 "thread": "nvmf_tgt_poll_group_000", 00:17:21.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:21.809 "listen_address": { 00:17:21.809 "trtype": "TCP", 00:17:21.809 "adrfam": "IPv4", 00:17:21.809 "traddr": "10.0.0.2", 00:17:21.809 "trsvcid": "4420" 00:17:21.809 }, 00:17:21.809 "peer_address": { 00:17:21.809 "trtype": "TCP", 00:17:21.810 "adrfam": "IPv4", 00:17:21.810 "traddr": "10.0.0.1", 00:17:21.810 "trsvcid": "42794" 00:17:21.810 }, 00:17:21.810 "auth": { 00:17:21.810 "state": "completed", 00:17:21.810 "digest": "sha384", 00:17:21.810 "dhgroup": "ffdhe4096" 00:17:21.810 } 00:17:21.810 } 00:17:21.810 ]' 00:17:21.810 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.810 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.810 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.810 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:21.810 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.068 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.068 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.068 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.068 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:17:22.068 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:17:22.635 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.636 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:22.636 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.636 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.636 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.636 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.636 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.636 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.894 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:22.894 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.894 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:22.894 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:22.894 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:22.894 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.894 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.894 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.894 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.894 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.894 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.894 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.894 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.153 00:17:23.153 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.153 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.153 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.411 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.411 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.411 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.411 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.411 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.411 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.411 { 00:17:23.411 "cntlid": 77, 00:17:23.411 "qid": 0, 00:17:23.411 "state": "enabled", 00:17:23.411 "thread": "nvmf_tgt_poll_group_000", 00:17:23.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:23.411 "listen_address": { 00:17:23.411 "trtype": "TCP", 00:17:23.411 "adrfam": "IPv4", 00:17:23.411 "traddr": "10.0.0.2", 00:17:23.411 "trsvcid": "4420" 00:17:23.411 }, 00:17:23.411 "peer_address": { 00:17:23.411 "trtype": "TCP", 00:17:23.411 "adrfam": "IPv4", 00:17:23.411 "traddr": "10.0.0.1", 00:17:23.411 "trsvcid": "56172" 00:17:23.411 }, 00:17:23.411 "auth": { 00:17:23.411 "state": "completed", 00:17:23.411 "digest": "sha384", 00:17:23.412 "dhgroup": "ffdhe4096" 00:17:23.412 } 00:17:23.412 } 00:17:23.412 ]' 00:17:23.412 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.412 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.412 19:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.412 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:23.412 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.412 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.412 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.669 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.669 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:17:23.669 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:17:24.235 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.235 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:24.235 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.235 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.235 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.235 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.235 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.235 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.497 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:24.497 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.497 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.497 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:24.497 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:24.497 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.497 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:24.497 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.497 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.497 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.497 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:24.497 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.497 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.756 00:17:24.756 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.756 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.756 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.014 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.014 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.014 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.014 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.014 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.014 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.014 { 00:17:25.014 "cntlid": 79, 00:17:25.014 "qid": 0, 00:17:25.014 "state": "enabled", 00:17:25.014 "thread": "nvmf_tgt_poll_group_000", 00:17:25.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:25.014 "listen_address": { 00:17:25.014 "trtype": "TCP", 00:17:25.014 "adrfam": "IPv4", 00:17:25.014 "traddr": "10.0.0.2", 00:17:25.014 "trsvcid": "4420" 00:17:25.014 }, 00:17:25.014 "peer_address": { 00:17:25.014 "trtype": "TCP", 00:17:25.014 "adrfam": "IPv4", 00:17:25.014 "traddr": "10.0.0.1", 00:17:25.014 "trsvcid": "56200" 00:17:25.014 }, 00:17:25.014 "auth": { 00:17:25.014 "state": "completed", 00:17:25.014 "digest": "sha384", 00:17:25.014 "dhgroup": "ffdhe4096" 00:17:25.014 } 00:17:25.014 } 00:17:25.014 ]' 00:17:25.014 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.014 19:18:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.014 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.014 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.014 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.272 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.272 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.272 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.272 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:17:25.272 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:17:25.838 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.839 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:25.839 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.839 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.839 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.839 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.839 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.839 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:25.839 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:26.097 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:26.097 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.097 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.097 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:26.097 19:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:26.097 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.097 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.097 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.097 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.097 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.097 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.097 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.097 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.358 00:17:26.358 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.358 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.358 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.616 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.617 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.617 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.617 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.617 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.617 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.617 { 00:17:26.617 "cntlid": 81, 00:17:26.617 "qid": 0, 00:17:26.617 "state": "enabled", 00:17:26.617 "thread": "nvmf_tgt_poll_group_000", 00:17:26.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:26.617 "listen_address": { 00:17:26.617 "trtype": "TCP", 00:17:26.617 "adrfam": "IPv4", 00:17:26.617 "traddr": "10.0.0.2", 00:17:26.617 "trsvcid": "4420" 00:17:26.617 }, 00:17:26.617 "peer_address": { 00:17:26.617 "trtype": "TCP", 00:17:26.617 "adrfam": "IPv4", 00:17:26.617 "traddr": "10.0.0.1", 00:17:26.617 "trsvcid": "56242" 00:17:26.617 }, 00:17:26.617 "auth": { 00:17:26.617 "state": "completed", 00:17:26.617 "digest": 
"sha384", 00:17:26.617 "dhgroup": "ffdhe6144" 00:17:26.617 } 00:17:26.617 } 00:17:26.617 ]' 00:17:26.617 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.617 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.617 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.875 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:26.875 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.875 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.875 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.875 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.133 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:17:27.133 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.700 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.266 00:17:28.266 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.266 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.266 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.266 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.266 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.266 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.266 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.266 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.266 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.266 { 00:17:28.266 "cntlid": 83, 00:17:28.266 "qid": 0, 00:17:28.266 "state": "enabled", 00:17:28.266 "thread": "nvmf_tgt_poll_group_000", 00:17:28.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:28.266 "listen_address": { 00:17:28.266 "trtype": "TCP", 00:17:28.266 "adrfam": "IPv4", 00:17:28.266 "traddr": "10.0.0.2", 00:17:28.266 
"trsvcid": "4420" 00:17:28.266 }, 00:17:28.266 "peer_address": { 00:17:28.266 "trtype": "TCP", 00:17:28.266 "adrfam": "IPv4", 00:17:28.266 "traddr": "10.0.0.1", 00:17:28.266 "trsvcid": "56264" 00:17:28.266 }, 00:17:28.266 "auth": { 00:17:28.266 "state": "completed", 00:17:28.266 "digest": "sha384", 00:17:28.266 "dhgroup": "ffdhe6144" 00:17:28.266 } 00:17:28.266 } 00:17:28.266 ]' 00:17:28.266 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.525 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.525 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.525 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:28.525 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.525 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.525 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.525 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.783 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:17:28.783 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:17:29.348 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.348 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:29.348 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.348 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.349 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.349 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.349 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.349 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.610 
19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:29.610 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.610 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.610 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:29.610 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:29.610 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.610 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.610 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.610 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.610 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.610 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.610 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.610 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.869 00:17:29.869 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.869 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.869 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.127 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.127 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.127 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.127 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.127 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.127 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.127 { 00:17:30.127 "cntlid": 85, 00:17:30.127 "qid": 0, 00:17:30.127 "state": "enabled", 00:17:30.127 "thread": "nvmf_tgt_poll_group_000", 00:17:30.127 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:30.127 "listen_address": { 00:17:30.127 "trtype": "TCP", 00:17:30.127 "adrfam": "IPv4", 00:17:30.127 "traddr": "10.0.0.2", 00:17:30.127 "trsvcid": "4420" 00:17:30.127 }, 00:17:30.127 "peer_address": { 00:17:30.127 "trtype": "TCP", 00:17:30.127 "adrfam": "IPv4", 00:17:30.127 "traddr": "10.0.0.1", 00:17:30.127 "trsvcid": "56276" 00:17:30.127 }, 00:17:30.127 "auth": { 00:17:30.127 "state": "completed", 00:17:30.127 "digest": "sha384", 00:17:30.127 "dhgroup": "ffdhe6144" 00:17:30.127 } 00:17:30.127 } 00:17:30.127 ]' 00:17:30.127 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.127 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.127 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.128 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:30.128 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.128 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.128 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.128 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.386 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:17:30.386 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:17:30.954 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.954 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:30.954 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.954 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.954 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.954 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.954 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.954 19:18:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.213 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:31.213 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.213 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.213 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:31.213 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:31.213 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.213 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:31.213 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.213 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.213 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.213 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:31.213 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.213 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.472 00:17:31.472 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.472 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.472 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.730 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.730 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.730 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.730 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.730 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.730 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.730 { 00:17:31.730 "cntlid": 87, 
00:17:31.730 "qid": 0, 00:17:31.730 "state": "enabled", 00:17:31.730 "thread": "nvmf_tgt_poll_group_000", 00:17:31.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:31.730 "listen_address": { 00:17:31.730 "trtype": "TCP", 00:17:31.730 "adrfam": "IPv4", 00:17:31.730 "traddr": "10.0.0.2", 00:17:31.730 "trsvcid": "4420" 00:17:31.730 }, 00:17:31.730 "peer_address": { 00:17:31.730 "trtype": "TCP", 00:17:31.730 "adrfam": "IPv4", 00:17:31.730 "traddr": "10.0.0.1", 00:17:31.730 "trsvcid": "56298" 00:17:31.730 }, 00:17:31.730 "auth": { 00:17:31.730 "state": "completed", 00:17:31.730 "digest": "sha384", 00:17:31.730 "dhgroup": "ffdhe6144" 00:17:31.730 } 00:17:31.730 } 00:17:31.730 ]' 00:17:31.730 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.730 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.730 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.730 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.730 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.730 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.730 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.730 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.989 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:17:31.989 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:17:32.555 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.555 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:32.555 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.555 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.555 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.555 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.555 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.555 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:32.555 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:32.813 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:32.813 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.813 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.813 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:32.813 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:32.813 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.813 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.813 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.813 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.813 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.813 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.813 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.814 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.380 00:17:33.380 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.380 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.380 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.638 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.639 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.639 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.639 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.639 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.639 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.639 { 00:17:33.639 "cntlid": 89, 00:17:33.639 "qid": 0, 00:17:33.639 "state": "enabled", 00:17:33.639 "thread": "nvmf_tgt_poll_group_000", 00:17:33.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:33.639 "listen_address": { 00:17:33.639 "trtype": "TCP", 00:17:33.639 "adrfam": "IPv4", 00:17:33.639 "traddr": "10.0.0.2", 00:17:33.639 "trsvcid": "4420" 00:17:33.639 }, 00:17:33.639 "peer_address": { 00:17:33.639 "trtype": "TCP", 00:17:33.639 "adrfam": "IPv4", 00:17:33.639 "traddr": "10.0.0.1", 00:17:33.639 "trsvcid": "56696" 00:17:33.639 }, 00:17:33.639 "auth": { 00:17:33.639 "state": "completed", 00:17:33.639 "digest": "sha384", 00:17:33.639 "dhgroup": "ffdhe8192" 00:17:33.639 } 00:17:33.639 } 00:17:33.639 ]' 00:17:33.639 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.639 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.639 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.639 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:33.639 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.639 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.639 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.639 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.897 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:17:33.897 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:17:34.462 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.462 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:34.462 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.462 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.462 19:18:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.462 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.462 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:34.462 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:34.720 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:34.720 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.720 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:34.720 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:34.720 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:34.720 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.720 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.720 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.720 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.720 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.720 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.720 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.720 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.310 00:17:35.310 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.310 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.310 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.311 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.311 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:35.311 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.311 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.311 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.311 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.311 { 00:17:35.311 "cntlid": 91, 00:17:35.311 "qid": 0, 00:17:35.311 "state": "enabled", 00:17:35.311 "thread": "nvmf_tgt_poll_group_000", 00:17:35.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:35.311 "listen_address": { 00:17:35.311 "trtype": "TCP", 00:17:35.311 "adrfam": "IPv4", 00:17:35.311 "traddr": "10.0.0.2", 00:17:35.311 "trsvcid": "4420" 00:17:35.311 }, 00:17:35.311 "peer_address": { 00:17:35.311 "trtype": "TCP", 00:17:35.311 "adrfam": "IPv4", 00:17:35.311 "traddr": "10.0.0.1", 00:17:35.311 "trsvcid": "56720" 00:17:35.311 }, 00:17:35.311 "auth": { 00:17:35.311 "state": "completed", 00:17:35.311 "digest": "sha384", 00:17:35.311 "dhgroup": "ffdhe8192" 00:17:35.311 } 00:17:35.311 } 00:17:35.311 ]' 00:17:35.311 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.311 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.311 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.311 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:35.311 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.569 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.569 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.569 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.569 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:17:35.569 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:17:36.135 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.135 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:36.135 19:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.135 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.135 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.135 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.135 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.135 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.392 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:36.392 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.393 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:36.393 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:36.393 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:36.393 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.393 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.393 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.393 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.393 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.393 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.393 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.393 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.959 00:17:36.959 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.959 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.959 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.218 19:19:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.218 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.218 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.218 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.218 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.218 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.218 { 00:17:37.218 "cntlid": 93, 00:17:37.218 "qid": 0, 00:17:37.218 "state": "enabled", 00:17:37.218 "thread": "nvmf_tgt_poll_group_000", 00:17:37.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:37.218 "listen_address": { 00:17:37.218 "trtype": "TCP", 00:17:37.218 "adrfam": "IPv4", 00:17:37.218 "traddr": "10.0.0.2", 00:17:37.218 "trsvcid": "4420" 00:17:37.218 }, 00:17:37.218 "peer_address": { 00:17:37.218 "trtype": "TCP", 00:17:37.218 "adrfam": "IPv4", 00:17:37.218 "traddr": "10.0.0.1", 00:17:37.218 "trsvcid": "56754" 00:17:37.218 }, 00:17:37.218 "auth": { 00:17:37.218 "state": "completed", 00:17:37.218 "digest": "sha384", 00:17:37.218 "dhgroup": "ffdhe8192" 00:17:37.218 } 00:17:37.218 } 00:17:37.218 ]' 00:17:37.218 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.218 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.218 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.218 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.218 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.218 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.218 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.218 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.476 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:17:37.476 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:17:38.042 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.042 19:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:38.042 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.042 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.042 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.042 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.042 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.042 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.301 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:38.301 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.301 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:38.301 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:38.301 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:38.301 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.301 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:38.301 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.301 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.301 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.301 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:38.301 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:38.301 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:38.869 00:17:38.869 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.869 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.869 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.869 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.869 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.869 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.869 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.869 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.869 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.869 { 00:17:38.869 "cntlid": 95, 00:17:38.869 "qid": 0, 00:17:38.869 "state": "enabled", 00:17:38.869 "thread": "nvmf_tgt_poll_group_000", 00:17:38.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:38.869 "listen_address": { 00:17:38.869 "trtype": "TCP", 00:17:38.869 "adrfam": "IPv4", 00:17:38.869 "traddr": "10.0.0.2", 00:17:38.869 "trsvcid": "4420" 00:17:38.869 }, 00:17:38.869 "peer_address": { 00:17:38.869 "trtype": "TCP", 00:17:38.869 "adrfam": "IPv4", 00:17:38.869 "traddr": "10.0.0.1", 00:17:38.869 "trsvcid": "56782" 00:17:38.869 }, 00:17:38.869 "auth": { 00:17:38.869 "state": "completed", 00:17:38.869 "digest": "sha384", 00:17:38.869 "dhgroup": "ffdhe8192" 00:17:38.869 } 00:17:38.869 } 00:17:38.869 ]' 00:17:38.869 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.128 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.128 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.128 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:39.128 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.128 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.128 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.128 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.387 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:17:39.387 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:17:39.954 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.954 19:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:39.954 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.954 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.954 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.954 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:39.954 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.954 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.954 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:39.954 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:39.954 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:39.954 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.954 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:39.954 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:39.954 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:39.954 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.954 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.954 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.954 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.213 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.213 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.213 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.213 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.213 00:17:40.213 
19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.213 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.213 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.472 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.472 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.472 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.472 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.472 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.472 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.472 { 00:17:40.472 "cntlid": 97, 00:17:40.472 "qid": 0, 00:17:40.472 "state": "enabled", 00:17:40.472 "thread": "nvmf_tgt_poll_group_000", 00:17:40.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:40.472 "listen_address": { 00:17:40.472 "trtype": "TCP", 00:17:40.472 "adrfam": "IPv4", 00:17:40.472 "traddr": "10.0.0.2", 00:17:40.472 "trsvcid": "4420" 00:17:40.472 }, 00:17:40.472 "peer_address": { 00:17:40.472 "trtype": "TCP", 00:17:40.472 "adrfam": "IPv4", 00:17:40.472 "traddr": "10.0.0.1", 00:17:40.472 "trsvcid": "56818" 00:17:40.472 }, 00:17:40.472 "auth": { 00:17:40.472 "state": "completed", 00:17:40.472 "digest": "sha512", 00:17:40.472 "dhgroup": "null" 00:17:40.472 } 00:17:40.472 } 00:17:40.472 ]' 00:17:40.472 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.472 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.472 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.730 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:40.730 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.730 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.730 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.730 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.730 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:17:40.730 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:17:41.298 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.298 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:41.298 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.298 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.298 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.298 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.298 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:41.298 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:41.557 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:41.557 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.557 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.557 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:41.557 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:41.557 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.557 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.557 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.557 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.557 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.557 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.557 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.557 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.815 00:17:41.815 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.815 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.815 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.073 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.073 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.073 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.073 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.073 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.073 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.073 { 00:17:42.073 "cntlid": 99, 00:17:42.073 "qid": 0, 00:17:42.073 "state": "enabled", 00:17:42.073 "thread": "nvmf_tgt_poll_group_000", 00:17:42.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:42.073 "listen_address": { 00:17:42.073 "trtype": "TCP", 00:17:42.073 "adrfam": "IPv4", 00:17:42.073 "traddr": "10.0.0.2", 00:17:42.073 "trsvcid": "4420" 00:17:42.073 }, 00:17:42.073 "peer_address": { 00:17:42.073 "trtype": "TCP", 00:17:42.073 "adrfam": "IPv4", 00:17:42.073 "traddr": "10.0.0.1", 00:17:42.073 "trsvcid": "56846" 00:17:42.073 }, 00:17:42.073 "auth": { 00:17:42.073 "state": "completed", 00:17:42.073 "digest": "sha512", 00:17:42.073 "dhgroup": "null" 00:17:42.073 } 00:17:42.073 } 00:17:42.073 ]' 00:17:42.073 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.073 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.073 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.073 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:42.073 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.073 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.073 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.073 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.331 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:17:42.331 19:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:17:42.897 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.897 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:42.897 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.897 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.897 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.897 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.897 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:42.897 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.155 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:43.155 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.155 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.155 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:43.155 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:43.155 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.155 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.155 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.155 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.155 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.155 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.155 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
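The entries above are one pass of connect_authenticate (here sha512 with the null DH group and key2): the host-side bdev layer is restricted to the digest/dhgroup under test, the host NQN is re-added to the subsystem with this iteration's key pair, and a controller is attached so the resulting qpair's auth block can be checked. A condensed sketch of that RPC sequence, using the socket paths, NQNs and key names visible in this log; the key0..key3/ckey0..ckey3 names are assumed to have been registered with target and host earlier in the test, and the real run loops this over every digest/dhgroup/key combination:

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration, distilled from the RPC calls logged above.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
digest=sha512
dhgroup=null
keyid=2

# Host side: only advertise the digest and DH group under test.
"$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Target side (default RPC socket assumed): authorize the host with this iteration's key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Host side: attach a controller, which forces the DH-HMAC-CHAP handshake.
"$rpc" -s "$host_sock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Target side: the qpair's auth block should report the negotiated digest, dhgroup and state.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

# Clean up before the next digest/dhgroup/key combination.
"$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0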
00:17:43.155 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.413 00:17:43.413 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.413 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.413 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.671 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.671 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.671 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.671 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.672 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.672 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.672 { 00:17:43.672 "cntlid": 101, 00:17:43.672 "qid": 0, 00:17:43.672 "state": "enabled", 00:17:43.672 "thread": "nvmf_tgt_poll_group_000", 00:17:43.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:43.672 "listen_address": { 00:17:43.672 "trtype": "TCP", 00:17:43.672 "adrfam": "IPv4", 00:17:43.672 "traddr": "10.0.0.2", 00:17:43.672 "trsvcid": "4420" 00:17:43.672 }, 00:17:43.672 "peer_address": { 00:17:43.672 "trtype": "TCP", 00:17:43.672 "adrfam": "IPv4", 00:17:43.672 "traddr": "10.0.0.1", 00:17:43.672 "trsvcid": "42286" 00:17:43.672 }, 00:17:43.672 "auth": { 00:17:43.672 "state": "completed", 00:17:43.672 "digest": "sha512", 00:17:43.672 "dhgroup": "null" 00:17:43.672 } 00:17:43.672 } 00:17:43.672 ]' 00:17:43.672 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.672 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.672 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.672 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:43.672 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.672 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.672 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.672 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.930 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:17:43.930 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:17:44.498 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.498 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:44.498 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.498 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.498 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.498 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.498 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:44.498 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:44.757 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:44.757 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.757 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:44.757 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:44.757 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:44.757 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.758 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:44.758 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.758 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.758 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.758 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:44.758 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.758 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.016 00:17:45.016 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.016 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.016 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.274 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.274 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.274 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.274 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.274 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.274 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.274 { 00:17:45.274 "cntlid": 103, 00:17:45.274 "qid": 0, 00:17:45.274 "state": "enabled", 00:17:45.274 "thread": "nvmf_tgt_poll_group_000", 00:17:45.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:45.274 "listen_address": { 00:17:45.274 "trtype": "TCP", 00:17:45.274 "adrfam": "IPv4", 00:17:45.274 "traddr": "10.0.0.2", 00:17:45.274 "trsvcid": "4420" 00:17:45.274 }, 00:17:45.274 "peer_address": { 00:17:45.274 "trtype": "TCP", 00:17:45.274 "adrfam": "IPv4", 00:17:45.274 "traddr": "10.0.0.1", 00:17:45.274 "trsvcid": "42312" 00:17:45.274 }, 00:17:45.274 "auth": { 00:17:45.274 "state": "completed", 00:17:45.274 "digest": "sha512", 00:17:45.274 "dhgroup": "null" 00:17:45.274 } 00:17:45.274 } 00:17:45.274 ]' 00:17:45.274 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.274 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.274 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.274 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:45.274 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.274 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.274 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.274 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.533 19:19:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:17:45.533 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:17:46.100 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.100 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:46.100 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.100 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.100 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.100 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.100 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.100 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.100 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.359 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:46.359 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.359 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:46.359 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:46.359 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:46.359 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.359 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.359 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.359 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.359 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.359 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
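After the bdev-level check, each combination is also exercised end to end through the kernel initiator: the generated DHHC-1 secrets are handed directly to nvme-cli, the fabric connect is expected to authenticate, and the controller is then disconnected and the host entry removed before the next pass. A rough host-side equivalent, with placeholder secrets standing in for the generated DHHC-1 strings shown inline above:

#!/usr/bin/env bash
# Kernel-initiator half of an iteration; secrets are placeholders for the generated DHHC-1 values.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
hostid=00ad29c2-ccbd-e911-906e-0017a4403562
dhchap_secret='DHHC-1:00:<host-secret>'             # placeholder
dhchap_ctrl_secret='DHHC-1:03:<controller-secret>'  # placeholder

# Connect via the kernel NVMe/TCP initiator, authenticating with bidirectional DH-HMAC-CHAP.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "$dhchap_secret" --dhchap-ctrl-secret "$dhchap_ctrl_secret"

# Tear the connection down and de-authorize the host before the next digest/dhgroup pass.
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"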
00:17:46.359 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.359 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.617 00:17:46.617 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.617 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.617 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.875 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.875 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.875 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.875 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.875 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.875 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.875 { 00:17:46.875 "cntlid": 105, 00:17:46.875 "qid": 0, 00:17:46.875 "state": "enabled", 00:17:46.875 "thread": "nvmf_tgt_poll_group_000", 00:17:46.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:46.875 "listen_address": { 00:17:46.875 "trtype": "TCP", 00:17:46.875 "adrfam": "IPv4", 00:17:46.875 "traddr": "10.0.0.2", 00:17:46.875 "trsvcid": "4420" 00:17:46.875 }, 00:17:46.875 "peer_address": { 00:17:46.875 "trtype": "TCP", 00:17:46.875 "adrfam": "IPv4", 00:17:46.875 "traddr": "10.0.0.1", 00:17:46.875 "trsvcid": "42344" 00:17:46.875 }, 00:17:46.875 "auth": { 00:17:46.875 "state": "completed", 00:17:46.875 "digest": "sha512", 00:17:46.875 "dhgroup": "ffdhe2048" 00:17:46.875 } 00:17:46.875 } 00:17:46.875 ]' 00:17:46.875 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.875 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.875 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.875 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.875 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.875 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.875 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.875 19:19:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.133 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:17:47.133 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:17:47.700 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.700 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:47.700 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.700 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.700 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.700 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.700 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.700 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.958 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:47.958 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.958 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.958 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:47.958 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:47.958 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.958 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.958 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.958 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:47.958 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.958 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.958 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.958 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.216 00:17:48.216 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.216 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.216 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.475 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.475 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.475 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.475 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.475 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.475 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.475 { 00:17:48.475 "cntlid": 107, 00:17:48.475 "qid": 0, 00:17:48.475 "state": "enabled", 00:17:48.475 "thread": "nvmf_tgt_poll_group_000", 00:17:48.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:48.475 "listen_address": { 00:17:48.475 "trtype": "TCP", 00:17:48.475 "adrfam": "IPv4", 00:17:48.475 "traddr": "10.0.0.2", 00:17:48.475 "trsvcid": "4420" 00:17:48.475 }, 00:17:48.475 "peer_address": { 00:17:48.475 "trtype": "TCP", 00:17:48.475 "adrfam": "IPv4", 00:17:48.475 "traddr": "10.0.0.1", 00:17:48.475 "trsvcid": "42376" 00:17:48.475 }, 00:17:48.475 "auth": { 00:17:48.475 "state": "completed", 00:17:48.475 "digest": "sha512", 00:17:48.475 "dhgroup": "ffdhe2048" 00:17:48.475 } 00:17:48.475 } 00:17:48.475 ]' 00:17:48.475 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.475 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.475 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.475 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:48.475 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:48.475 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.475 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.475 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.792 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:17:48.793 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
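Every hostrpc entry above (target/auth.sh@31) expands to the workspace rpc.py pointed at the host-side SPDK application's RPC socket. A sketch of that wrapper as it appears in the trace; the script path and the /var/tmp/host.sock socket are the ones printed above:

    hostrpc() {
        # forward the RPC to the host-side SPDK app listening on /var/tmp/host.sock
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }

    # example: reconfigure the host's allowed DH-CHAP digests and DH groups
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048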
00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.445 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.744 00:17:49.744 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.744 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.744 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.026 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.026 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.026 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.026 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.026 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.026 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.026 { 00:17:50.026 "cntlid": 109, 00:17:50.026 "qid": 0, 00:17:50.026 "state": "enabled", 00:17:50.026 "thread": "nvmf_tgt_poll_group_000", 00:17:50.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:50.026 "listen_address": { 00:17:50.026 "trtype": "TCP", 00:17:50.026 "adrfam": "IPv4", 00:17:50.026 "traddr": "10.0.0.2", 00:17:50.026 "trsvcid": "4420" 00:17:50.026 }, 00:17:50.026 "peer_address": { 00:17:50.026 "trtype": "TCP", 00:17:50.026 "adrfam": "IPv4", 00:17:50.026 "traddr": "10.0.0.1", 00:17:50.026 "trsvcid": "42408" 00:17:50.026 }, 00:17:50.026 "auth": { 00:17:50.026 "state": "completed", 00:17:50.026 "digest": "sha512", 00:17:50.026 "dhgroup": "ffdhe2048" 00:17:50.026 } 00:17:50.026 } 00:17:50.026 ]' 00:17:50.026 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.026 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.026 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.026 19:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:50.026 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.026 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.026 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.026 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.284 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:17:50.284 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:17:50.851 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.851 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:50.851 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.851 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.851 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.851 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.851 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.851 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:51.110 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:51.110 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.110 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:51.110 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:51.110 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:51.110 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.110 19:19:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:51.110 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.110 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.110 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.110 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:51.110 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.110 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.110 00:17:51.368 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.368 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.368 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.368 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.368 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.368 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.368 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.368 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.368 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.368 { 00:17:51.368 "cntlid": 111, 00:17:51.368 "qid": 0, 00:17:51.368 "state": "enabled", 00:17:51.368 "thread": "nvmf_tgt_poll_group_000", 00:17:51.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:51.368 "listen_address": { 00:17:51.368 "trtype": "TCP", 00:17:51.368 "adrfam": "IPv4", 00:17:51.368 "traddr": "10.0.0.2", 00:17:51.368 "trsvcid": "4420" 00:17:51.368 }, 00:17:51.368 "peer_address": { 00:17:51.368 "trtype": "TCP", 00:17:51.368 "adrfam": "IPv4", 00:17:51.368 "traddr": "10.0.0.1", 00:17:51.368 "trsvcid": "42442" 00:17:51.368 }, 00:17:51.368 "auth": { 00:17:51.368 "state": "completed", 00:17:51.368 "digest": "sha512", 00:17:51.368 "dhgroup": "ffdhe2048" 00:17:51.368 } 00:17:51.368 } 00:17:51.368 ]' 00:17:51.368 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.627 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.627 
19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.627 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:51.627 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.627 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.627 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.627 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.885 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:17:51.885 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:17:52.453 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.453 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:52.453 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.453 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.453 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.453 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.453 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.453 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:52.453 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:52.453 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:52.453 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.453 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.453 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:52.453 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:52.453 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.453 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.453 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.453 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.712 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.712 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.712 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.712 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.712 00:17:52.712 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.712 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.712 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.970 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.970 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.970 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.970 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.970 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.970 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.970 { 00:17:52.970 "cntlid": 113, 00:17:52.970 "qid": 0, 00:17:52.970 "state": "enabled", 00:17:52.970 "thread": "nvmf_tgt_poll_group_000", 00:17:52.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:52.970 "listen_address": { 00:17:52.970 "trtype": "TCP", 00:17:52.970 "adrfam": "IPv4", 00:17:52.970 "traddr": "10.0.0.2", 00:17:52.970 "trsvcid": "4420" 00:17:52.970 }, 00:17:52.970 "peer_address": { 00:17:52.970 "trtype": "TCP", 00:17:52.970 "adrfam": "IPv4", 00:17:52.970 "traddr": "10.0.0.1", 00:17:52.970 "trsvcid": "60950" 00:17:52.970 }, 00:17:52.970 "auth": { 00:17:52.970 "state": "completed", 00:17:52.970 "digest": "sha512", 00:17:52.970 "dhgroup": "ffdhe3072" 00:17:52.970 } 00:17:52.970 } 00:17:52.970 ]' 00:17:52.970 19:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.970 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.970 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.229 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:53.229 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.229 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.229 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.229 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.487 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:17:53.487 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:17:54.054 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.054 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:54.054 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.054 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.054 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.054 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.054 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.054 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.054 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:54.054 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.054 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:54.054 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:54.054 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:54.054 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.054 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.054 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.054 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.054 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.054 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.054 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.054 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.313 00:17:54.313 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.313 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.313 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.572 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.572 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.572 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.572 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.572 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.572 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.572 { 00:17:54.572 "cntlid": 115, 00:17:54.572 "qid": 0, 00:17:54.572 "state": "enabled", 00:17:54.572 "thread": "nvmf_tgt_poll_group_000", 00:17:54.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:54.572 "listen_address": { 00:17:54.572 "trtype": "TCP", 00:17:54.572 "adrfam": "IPv4", 00:17:54.572 "traddr": "10.0.0.2", 00:17:54.572 "trsvcid": "4420" 00:17:54.572 }, 00:17:54.572 "peer_address": { 00:17:54.572 "trtype": "TCP", 00:17:54.572 "adrfam": "IPv4", 
00:17:54.572 "traddr": "10.0.0.1", 00:17:54.572 "trsvcid": "60958" 00:17:54.572 }, 00:17:54.572 "auth": { 00:17:54.572 "state": "completed", 00:17:54.572 "digest": "sha512", 00:17:54.572 "dhgroup": "ffdhe3072" 00:17:54.572 } 00:17:54.572 } 00:17:54.572 ]' 00:17:54.572 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.572 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.572 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.572 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.572 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.830 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.830 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.830 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.830 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:17:54.830 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:17:55.397 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.397 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:55.397 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.397 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.397 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.397 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.397 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:55.397 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:55.656 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
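After each attach, the test pulls the subsystem's qpairs from the target and asserts that authentication actually completed with the expected digest and DH group (target/auth.sh@73-@77 above). A sketch of that check, assuming the rpc_cmd wrapper for the target-side rpc.py and the ffdhe3072 case currently being exercised:

    # fetch the active qpairs for the subsystem from the target
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # verify the controller name on the host side and the negotiated auth parameters
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512"    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe3072" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]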
00:17:55.656 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.656 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.656 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:55.656 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:55.656 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.656 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.656 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.656 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.656 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.656 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.656 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.656 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.914 00:17:55.914 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.914 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.914 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.173 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.173 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.173 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.173 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.173 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.173 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.173 { 00:17:56.173 "cntlid": 117, 00:17:56.173 "qid": 0, 00:17:56.173 "state": "enabled", 00:17:56.173 "thread": "nvmf_tgt_poll_group_000", 00:17:56.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:56.173 "listen_address": { 00:17:56.173 "trtype": "TCP", 
00:17:56.173 "adrfam": "IPv4", 00:17:56.173 "traddr": "10.0.0.2", 00:17:56.173 "trsvcid": "4420" 00:17:56.173 }, 00:17:56.173 "peer_address": { 00:17:56.173 "trtype": "TCP", 00:17:56.173 "adrfam": "IPv4", 00:17:56.173 "traddr": "10.0.0.1", 00:17:56.173 "trsvcid": "60984" 00:17:56.173 }, 00:17:56.173 "auth": { 00:17:56.173 "state": "completed", 00:17:56.173 "digest": "sha512", 00:17:56.173 "dhgroup": "ffdhe3072" 00:17:56.173 } 00:17:56.173 } 00:17:56.173 ]' 00:17:56.173 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.173 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.173 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.173 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.173 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.173 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.173 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.173 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.431 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:17:56.431 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:17:56.999 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.999 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:56.999 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.999 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.999 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.999 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.999 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.999 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:57.258 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:57.258 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.258 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.258 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:57.258 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:57.258 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.258 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:57.258 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.258 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.258 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.258 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:57.258 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.259 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.517 00:17:57.517 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.517 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.517 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.776 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.776 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.776 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.777 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.777 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.777 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.777 { 00:17:57.777 "cntlid": 119, 00:17:57.777 "qid": 0, 00:17:57.777 "state": "enabled", 00:17:57.777 "thread": "nvmf_tgt_poll_group_000", 00:17:57.777 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:57.777 "listen_address": { 00:17:57.777 "trtype": "TCP", 00:17:57.777 "adrfam": "IPv4", 00:17:57.777 "traddr": "10.0.0.2", 00:17:57.777 "trsvcid": "4420" 00:17:57.777 }, 00:17:57.777 "peer_address": { 00:17:57.777 "trtype": "TCP", 00:17:57.777 "adrfam": "IPv4", 00:17:57.777 "traddr": "10.0.0.1", 00:17:57.777 "trsvcid": "32770" 00:17:57.777 }, 00:17:57.777 "auth": { 00:17:57.777 "state": "completed", 00:17:57.777 "digest": "sha512", 00:17:57.777 "dhgroup": "ffdhe3072" 00:17:57.777 } 00:17:57.777 } 00:17:57.777 ]' 00:17:57.777 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.777 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.777 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.777 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:57.777 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.777 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.777 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.777 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.035 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:17:58.035 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:17:58.602 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.602 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:58.602 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.602 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.602 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.602 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.602 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.602 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:58.602 19:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:58.861 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:58.861 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.861 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.861 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:58.861 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:58.861 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.861 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.861 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.861 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.861 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.861 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.861 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.861 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.120 00:17:59.120 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.120 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.120 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.379 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.379 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.379 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.379 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.379 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.379 19:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.379 { 00:17:59.379 "cntlid": 121, 00:17:59.379 "qid": 0, 00:17:59.379 "state": "enabled", 00:17:59.379 "thread": "nvmf_tgt_poll_group_000", 00:17:59.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:59.379 "listen_address": { 00:17:59.379 "trtype": "TCP", 00:17:59.379 "adrfam": "IPv4", 00:17:59.379 "traddr": "10.0.0.2", 00:17:59.379 "trsvcid": "4420" 00:17:59.379 }, 00:17:59.379 "peer_address": { 00:17:59.379 "trtype": "TCP", 00:17:59.379 "adrfam": "IPv4", 00:17:59.379 "traddr": "10.0.0.1", 00:17:59.379 "trsvcid": "32802" 00:17:59.379 }, 00:17:59.379 "auth": { 00:17:59.379 "state": "completed", 00:17:59.379 "digest": "sha512", 00:17:59.379 "dhgroup": "ffdhe4096" 00:17:59.379 } 00:17:59.379 } 00:17:59.379 ]' 00:17:59.379 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.379 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.379 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.379 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.379 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.379 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.379 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.379 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.637 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:17:59.638 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:18:00.205 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.205 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:00.205 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.205 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.205 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
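The round that just finished (key0 with sha512/ffdhe4096) is representative of every cycle in this trace: register the host NQN on the subsystem with a DH-CHAP key pair, attach and detach a host-side bdev controller, then repeat the handshake with the kernel initiator via nvme-cli before removing the host again. A condensed sketch under the same assumptions as above; the DHHC-1 placeholder values are hypothetical stand-ins for the key0/ckey0 secrets printed in the trace:

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
    # hypothetical placeholders -- substitute the DHHC-1 secrets shown for key0/ckey0 above
    key0_secret='DHHC-1:00:<key0-secret>:'
    ckey0_secret='DHHC-1:03:<ckey0-secret>:'

    # target side: allow this host on the subsystem with key0/ckey0
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # host-side SPDK app: attach and detach the authenticated controller (bdev_connect in the trace)
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    hostrpc bdev_nvme_detach_controller nvme0

    # kernel initiator: same handshake through nvme-cli, then clean up
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret "$key0_secret" --dhchap-ctrl-secret "$ckey0_secret"
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"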
00:18:00.205 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.205 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:00.205 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:00.463 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:00.463 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.463 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.463 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:00.463 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:00.464 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.464 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.464 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.464 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.464 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.464 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.464 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.464 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.723 00:18:00.723 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.723 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.723 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.981 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.981 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.981 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.981 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.981 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.981 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.981 { 00:18:00.981 "cntlid": 123, 00:18:00.981 "qid": 0, 00:18:00.981 "state": "enabled", 00:18:00.981 "thread": "nvmf_tgt_poll_group_000", 00:18:00.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:00.981 "listen_address": { 00:18:00.981 "trtype": "TCP", 00:18:00.981 "adrfam": "IPv4", 00:18:00.981 "traddr": "10.0.0.2", 00:18:00.981 "trsvcid": "4420" 00:18:00.981 }, 00:18:00.981 "peer_address": { 00:18:00.981 "trtype": "TCP", 00:18:00.981 "adrfam": "IPv4", 00:18:00.981 "traddr": "10.0.0.1", 00:18:00.981 "trsvcid": "32830" 00:18:00.981 }, 00:18:00.981 "auth": { 00:18:00.981 "state": "completed", 00:18:00.981 "digest": "sha512", 00:18:00.981 "dhgroup": "ffdhe4096" 00:18:00.981 } 00:18:00.981 } 00:18:00.981 ]' 00:18:00.981 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.982 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.982 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.982 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:00.982 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.982 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.982 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.982 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.240 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:18:01.240 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:18:01.806 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.806 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:01.806 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.806 19:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.806 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.806 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.806 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.806 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.064 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:02.064 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.064 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.064 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:02.064 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:02.064 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.064 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.064 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.064 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.064 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.064 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.064 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.065 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.323 00:18:02.323 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.323 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.323 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.582 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.582 19:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.582 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.582 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.582 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.582 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.582 { 00:18:02.582 "cntlid": 125, 00:18:02.582 "qid": 0, 00:18:02.582 "state": "enabled", 00:18:02.582 "thread": "nvmf_tgt_poll_group_000", 00:18:02.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:02.582 "listen_address": { 00:18:02.582 "trtype": "TCP", 00:18:02.582 "adrfam": "IPv4", 00:18:02.582 "traddr": "10.0.0.2", 00:18:02.582 "trsvcid": "4420" 00:18:02.582 }, 00:18:02.582 "peer_address": { 00:18:02.582 "trtype": "TCP", 00:18:02.582 "adrfam": "IPv4", 00:18:02.582 "traddr": "10.0.0.1", 00:18:02.582 "trsvcid": "32860" 00:18:02.582 }, 00:18:02.582 "auth": { 00:18:02.582 "state": "completed", 00:18:02.582 "digest": "sha512", 00:18:02.582 "dhgroup": "ffdhe4096" 00:18:02.582 } 00:18:02.582 } 00:18:02.582 ]' 00:18:02.582 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.582 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.582 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.582 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:02.582 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.582 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.582 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.582 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.841 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:18:02.841 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:18:03.408 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.408 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:03.408 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.408 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.408 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.408 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.408 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.408 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.666 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:03.666 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.666 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.666 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:03.666 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:03.666 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.666 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:03.666 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.666 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.666 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.666 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:03.666 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.666 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.925 00:18:03.925 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.925 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.925 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.183 19:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.184 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.184 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.184 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.184 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.184 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.184 { 00:18:04.184 "cntlid": 127, 00:18:04.184 "qid": 0, 00:18:04.184 "state": "enabled", 00:18:04.184 "thread": "nvmf_tgt_poll_group_000", 00:18:04.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:04.184 "listen_address": { 00:18:04.184 "trtype": "TCP", 00:18:04.184 "adrfam": "IPv4", 00:18:04.184 "traddr": "10.0.0.2", 00:18:04.184 "trsvcid": "4420" 00:18:04.184 }, 00:18:04.184 "peer_address": { 00:18:04.184 "trtype": "TCP", 00:18:04.184 "adrfam": "IPv4", 00:18:04.184 "traddr": "10.0.0.1", 00:18:04.184 "trsvcid": "54462" 00:18:04.184 }, 00:18:04.184 "auth": { 00:18:04.184 "state": "completed", 00:18:04.184 "digest": "sha512", 00:18:04.184 "dhgroup": "ffdhe4096" 00:18:04.184 } 00:18:04.184 } 00:18:04.184 ]' 00:18:04.184 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.184 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.184 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.184 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:04.184 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.184 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.184 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.184 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.442 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:18:04.442 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:18:05.008 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.008 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:05.008 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.008 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.008 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.008 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.008 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.008 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.008 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.291 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:05.291 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.291 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.291 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:05.291 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:05.291 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.291 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.291 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.291 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.291 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.291 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.291 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.291 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.550 00:18:05.550 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.550 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.550 
19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.807 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.807 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.807 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.807 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.807 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.807 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.807 { 00:18:05.807 "cntlid": 129, 00:18:05.808 "qid": 0, 00:18:05.808 "state": "enabled", 00:18:05.808 "thread": "nvmf_tgt_poll_group_000", 00:18:05.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:05.808 "listen_address": { 00:18:05.808 "trtype": "TCP", 00:18:05.808 "adrfam": "IPv4", 00:18:05.808 "traddr": "10.0.0.2", 00:18:05.808 "trsvcid": "4420" 00:18:05.808 }, 00:18:05.808 "peer_address": { 00:18:05.808 "trtype": "TCP", 00:18:05.808 "adrfam": "IPv4", 00:18:05.808 "traddr": "10.0.0.1", 00:18:05.808 "trsvcid": "54496" 00:18:05.808 }, 00:18:05.808 "auth": { 00:18:05.808 "state": "completed", 00:18:05.808 "digest": "sha512", 00:18:05.808 "dhgroup": "ffdhe6144" 00:18:05.808 } 00:18:05.808 } 00:18:05.808 ]' 00:18:05.808 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.808 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.808 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.808 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:05.808 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.808 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.808 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.808 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.065 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:18:06.065 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret 
DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:18:06.630 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.630 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:06.630 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.630 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.630 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.630 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.630 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:06.630 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:06.925 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:06.925 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.925 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.925 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:06.925 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:06.925 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.925 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.925 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.925 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.925 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.925 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.925 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.925 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.183 00:18:07.183 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.183 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.183 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.441 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.441 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.441 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.441 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.441 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.441 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.441 { 00:18:07.441 "cntlid": 131, 00:18:07.441 "qid": 0, 00:18:07.441 "state": "enabled", 00:18:07.441 "thread": "nvmf_tgt_poll_group_000", 00:18:07.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:07.441 "listen_address": { 00:18:07.441 "trtype": "TCP", 00:18:07.441 "adrfam": "IPv4", 00:18:07.441 "traddr": "10.0.0.2", 00:18:07.441 "trsvcid": "4420" 00:18:07.441 }, 00:18:07.441 "peer_address": { 00:18:07.441 "trtype": "TCP", 00:18:07.441 "adrfam": "IPv4", 00:18:07.441 "traddr": "10.0.0.1", 00:18:07.441 "trsvcid": "54512" 00:18:07.441 }, 00:18:07.441 "auth": { 00:18:07.441 "state": "completed", 00:18:07.441 "digest": "sha512", 00:18:07.441 "dhgroup": "ffdhe6144" 00:18:07.441 } 00:18:07.441 } 00:18:07.441 ]' 00:18:07.441 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.441 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.441 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.700 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:07.700 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.700 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.700 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.700 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.700 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:18:07.700 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.633 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.891 00:18:08.891 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.891 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.891 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.150 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.150 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.150 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.150 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.150 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.150 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.150 { 00:18:09.150 "cntlid": 133, 00:18:09.150 "qid": 0, 00:18:09.150 "state": "enabled", 00:18:09.150 "thread": "nvmf_tgt_poll_group_000", 00:18:09.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:09.150 "listen_address": { 00:18:09.150 "trtype": "TCP", 00:18:09.150 "adrfam": "IPv4", 00:18:09.150 "traddr": "10.0.0.2", 00:18:09.150 "trsvcid": "4420" 00:18:09.150 }, 00:18:09.150 "peer_address": { 00:18:09.150 "trtype": "TCP", 00:18:09.150 "adrfam": "IPv4", 00:18:09.150 "traddr": "10.0.0.1", 00:18:09.150 "trsvcid": "54540" 00:18:09.150 }, 00:18:09.150 "auth": { 00:18:09.150 "state": "completed", 00:18:09.150 "digest": "sha512", 00:18:09.150 "dhgroup": "ffdhe6144" 00:18:09.150 } 00:18:09.150 } 00:18:09.150 ]' 00:18:09.150 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.150 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.150 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.408 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:09.408 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.408 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.408 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.408 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.666 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret 
DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:18:09.666 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:10.232 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.800 00:18:10.800 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.800 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.800 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.800 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.800 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.800 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.800 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.800 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.800 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.800 { 00:18:10.800 "cntlid": 135, 00:18:10.800 "qid": 0, 00:18:10.800 "state": "enabled", 00:18:10.800 "thread": "nvmf_tgt_poll_group_000", 00:18:10.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:10.800 "listen_address": { 00:18:10.800 "trtype": "TCP", 00:18:10.800 "adrfam": "IPv4", 00:18:10.800 "traddr": "10.0.0.2", 00:18:10.800 "trsvcid": "4420" 00:18:10.800 }, 00:18:10.800 "peer_address": { 00:18:10.800 "trtype": "TCP", 00:18:10.800 "adrfam": "IPv4", 00:18:10.800 "traddr": "10.0.0.1", 00:18:10.800 "trsvcid": "54562" 00:18:10.800 }, 00:18:10.800 "auth": { 00:18:10.800 "state": "completed", 00:18:10.800 "digest": "sha512", 00:18:10.800 "dhgroup": "ffdhe6144" 00:18:10.800 } 00:18:10.800 } 00:18:10.800 ]' 00:18:10.800 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.800 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.800 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.059 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:11.059 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.059 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.059 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.059 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.317 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:18:11.317 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.885 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.452 00:18:12.452 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.452 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.452 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.710 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.710 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.710 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.710 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.710 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.710 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.710 { 00:18:12.710 "cntlid": 137, 00:18:12.710 "qid": 0, 00:18:12.710 "state": "enabled", 00:18:12.710 "thread": "nvmf_tgt_poll_group_000", 00:18:12.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:12.710 "listen_address": { 00:18:12.710 "trtype": "TCP", 00:18:12.710 "adrfam": "IPv4", 00:18:12.710 "traddr": "10.0.0.2", 00:18:12.710 "trsvcid": "4420" 00:18:12.710 }, 00:18:12.710 "peer_address": { 00:18:12.710 "trtype": "TCP", 00:18:12.710 "adrfam": "IPv4", 00:18:12.710 "traddr": "10.0.0.1", 00:18:12.710 "trsvcid": "54590" 00:18:12.710 }, 00:18:12.710 "auth": { 00:18:12.710 "state": "completed", 00:18:12.710 "digest": "sha512", 00:18:12.710 "dhgroup": "ffdhe8192" 00:18:12.710 } 00:18:12.710 } 00:18:12.710 ]' 00:18:12.710 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.710 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.710 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.710 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.710 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.710 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.710 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.710 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.969 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:18:12.969 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:18:13.537 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.537 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:13.537 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.537 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.537 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.537 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.537 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:13.537 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:13.796 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:13.796 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.796 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.796 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:13.796 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:13.796 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.796 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.796 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.796 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.796 19:19:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.796 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.796 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.796 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.363 00:18:14.363 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.363 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.363 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.363 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.363 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.363 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.363 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.621 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.621 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.621 { 00:18:14.621 "cntlid": 139, 00:18:14.621 "qid": 0, 00:18:14.621 "state": "enabled", 00:18:14.621 "thread": "nvmf_tgt_poll_group_000", 00:18:14.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:14.621 "listen_address": { 00:18:14.621 "trtype": "TCP", 00:18:14.621 "adrfam": "IPv4", 00:18:14.621 "traddr": "10.0.0.2", 00:18:14.621 "trsvcid": "4420" 00:18:14.621 }, 00:18:14.621 "peer_address": { 00:18:14.621 "trtype": "TCP", 00:18:14.621 "adrfam": "IPv4", 00:18:14.621 "traddr": "10.0.0.1", 00:18:14.621 "trsvcid": "39520" 00:18:14.621 }, 00:18:14.621 "auth": { 00:18:14.621 "state": "completed", 00:18:14.621 "digest": "sha512", 00:18:14.621 "dhgroup": "ffdhe8192" 00:18:14.621 } 00:18:14.621 } 00:18:14.621 ]' 00:18:14.621 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.621 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.621 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.621 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.621 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.621 19:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.621 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.621 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.879 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:18:14.879 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: --dhchap-ctrl-secret DHHC-1:02:NGIzNmEwOGExZTAzMTFlNzNhMTQ0OGY0ZGJiNDdhY2QzNDBjMTU1Zjg0NThjYTdipuOQxA==: 00:18:15.445 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.445 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:15.445 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.445 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.445 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.445 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.445 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:15.445 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:15.703 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:15.703 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.704 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.704 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:15.704 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:15.704 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.704 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.704 19:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.704 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.704 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.704 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.704 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.704 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.962 00:18:16.224 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.224 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.224 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.224 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.224 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.224 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.224 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.224 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.224 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.224 { 00:18:16.224 "cntlid": 141, 00:18:16.224 "qid": 0, 00:18:16.224 "state": "enabled", 00:18:16.224 "thread": "nvmf_tgt_poll_group_000", 00:18:16.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:16.224 "listen_address": { 00:18:16.224 "trtype": "TCP", 00:18:16.225 "adrfam": "IPv4", 00:18:16.225 "traddr": "10.0.0.2", 00:18:16.225 "trsvcid": "4420" 00:18:16.225 }, 00:18:16.225 "peer_address": { 00:18:16.225 "trtype": "TCP", 00:18:16.225 "adrfam": "IPv4", 00:18:16.225 "traddr": "10.0.0.1", 00:18:16.225 "trsvcid": "39550" 00:18:16.225 }, 00:18:16.225 "auth": { 00:18:16.225 "state": "completed", 00:18:16.225 "digest": "sha512", 00:18:16.225 "dhgroup": "ffdhe8192" 00:18:16.225 } 00:18:16.225 } 00:18:16.225 ]' 00:18:16.225 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.483 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.483 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.483 19:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:16.483 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.483 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.483 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.483 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.741 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:18:16.741 19:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:01:ZThlNjMwZWM2YTQyNGRhYTM0Yjc0ODU2NGY0ZWQwMjRoHdpO: 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.307 19:19:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.307 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.875 00:18:17.875 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.875 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.875 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.133 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.133 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.133 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.133 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.133 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.133 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.133 { 00:18:18.133 "cntlid": 143, 00:18:18.133 "qid": 0, 00:18:18.133 "state": "enabled", 00:18:18.133 "thread": "nvmf_tgt_poll_group_000", 00:18:18.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:18.133 "listen_address": { 00:18:18.133 "trtype": "TCP", 00:18:18.133 "adrfam": "IPv4", 00:18:18.133 "traddr": "10.0.0.2", 00:18:18.133 "trsvcid": "4420" 00:18:18.133 }, 00:18:18.133 "peer_address": { 00:18:18.133 "trtype": "TCP", 00:18:18.133 "adrfam": "IPv4", 00:18:18.133 "traddr": "10.0.0.1", 00:18:18.133 "trsvcid": "39574" 00:18:18.133 }, 00:18:18.133 "auth": { 00:18:18.133 "state": "completed", 00:18:18.133 "digest": "sha512", 00:18:18.133 "dhgroup": "ffdhe8192" 00:18:18.133 } 00:18:18.133 } 00:18:18.133 ]' 00:18:18.133 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.133 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.133 
19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.133 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.133 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.391 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.391 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.391 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.391 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:18:18.391 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:18:18.957 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.957 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:18.957 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.957 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.957 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.957 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:18.957 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:18.957 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:18.957 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:18.957 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:18.957 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:19.216 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:19.216 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.216 19:19:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.216 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:19.216 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:19.216 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.216 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.216 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.216 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.216 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.216 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.216 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.216 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.782 00:18:19.783 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.783 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.783 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.041 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.041 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.041 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.041 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.041 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.041 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.041 { 00:18:20.041 "cntlid": 145, 00:18:20.041 "qid": 0, 00:18:20.041 "state": "enabled", 00:18:20.041 "thread": "nvmf_tgt_poll_group_000", 00:18:20.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:20.041 "listen_address": { 00:18:20.041 "trtype": "TCP", 00:18:20.041 "adrfam": "IPv4", 00:18:20.041 "traddr": "10.0.0.2", 00:18:20.041 "trsvcid": "4420" 00:18:20.041 }, 00:18:20.041 "peer_address": { 00:18:20.041 
"trtype": "TCP", 00:18:20.041 "adrfam": "IPv4", 00:18:20.041 "traddr": "10.0.0.1", 00:18:20.041 "trsvcid": "39596" 00:18:20.041 }, 00:18:20.041 "auth": { 00:18:20.041 "state": "completed", 00:18:20.041 "digest": "sha512", 00:18:20.041 "dhgroup": "ffdhe8192" 00:18:20.041 } 00:18:20.041 } 00:18:20.041 ]' 00:18:20.041 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.041 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.041 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.041 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.041 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.041 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.041 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.041 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.301 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:18:20.301 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzExZDNkMzk5MTBmNWY5NzQxNTVjMzM2ZDJkODI0OTNmZTEyOWMzM2E1ODU4YjE4ZbPXgA==: --dhchap-ctrl-secret DHHC-1:03:MzE0NGQ5ZTFiOTc3ODg3MzlkM2RkZWQ4ZjdkMjU2ZDhjYzAzNTYxYmIxNTA4ZTc5MmE3ZjU4NjMyOWVmODA2ORc2qvI=: 00:18:20.868 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.868 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:20.868 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.868 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.868 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.868 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:20.868 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.868 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.868 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.868 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:20.868 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:20.868 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:20.868 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:20.868 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.868 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:20.868 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.868 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:20.868 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:20.868 19:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:21.434 request: 00:18:21.434 { 00:18:21.434 "name": "nvme0", 00:18:21.434 "trtype": "tcp", 00:18:21.434 "traddr": "10.0.0.2", 00:18:21.434 "adrfam": "ipv4", 00:18:21.434 "trsvcid": "4420", 00:18:21.434 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:21.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:21.434 "prchk_reftag": false, 00:18:21.434 "prchk_guard": false, 00:18:21.434 "hdgst": false, 00:18:21.434 "ddgst": false, 00:18:21.434 "dhchap_key": "key2", 00:18:21.434 "allow_unrecognized_csi": false, 00:18:21.434 "method": "bdev_nvme_attach_controller", 00:18:21.434 "req_id": 1 00:18:21.434 } 00:18:21.434 Got JSON-RPC error response 00:18:21.434 response: 00:18:21.434 { 00:18:21.434 "code": -5, 00:18:21.434 "message": "Input/output error" 00:18:21.434 } 00:18:21.434 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:21.434 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:21.434 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:21.434 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:21.434 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:21.434 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.434 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.434 19:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.434 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.434 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.434 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.434 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.434 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:21.434 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:21.434 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:21.434 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:21.434 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.434 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:21.434 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.434 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:21.435 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:21.435 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:21.693 request: 00:18:21.693 { 00:18:21.693 "name": "nvme0", 00:18:21.693 "trtype": "tcp", 00:18:21.693 "traddr": "10.0.0.2", 00:18:21.693 "adrfam": "ipv4", 00:18:21.693 "trsvcid": "4420", 00:18:21.693 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:21.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:21.693 "prchk_reftag": false, 00:18:21.693 "prchk_guard": false, 00:18:21.693 "hdgst": false, 00:18:21.693 "ddgst": false, 00:18:21.693 "dhchap_key": "key1", 00:18:21.693 "dhchap_ctrlr_key": "ckey2", 00:18:21.693 "allow_unrecognized_csi": false, 00:18:21.693 "method": "bdev_nvme_attach_controller", 00:18:21.693 "req_id": 1 00:18:21.693 } 00:18:21.693 Got JSON-RPC error response 00:18:21.693 response: 00:18:21.693 { 00:18:21.693 "code": -5, 00:18:21.693 "message": "Input/output error" 00:18:21.693 } 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:21.693 19:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.693 19:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.261 request: 00:18:22.261 { 00:18:22.261 "name": "nvme0", 00:18:22.261 "trtype": "tcp", 00:18:22.261 "traddr": "10.0.0.2", 00:18:22.261 "adrfam": "ipv4", 00:18:22.261 "trsvcid": "4420", 00:18:22.261 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:22.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:22.261 "prchk_reftag": false, 00:18:22.261 "prchk_guard": false, 00:18:22.261 "hdgst": false, 00:18:22.261 "ddgst": false, 00:18:22.261 "dhchap_key": "key1", 00:18:22.261 "dhchap_ctrlr_key": "ckey1", 00:18:22.261 "allow_unrecognized_csi": false, 00:18:22.261 "method": "bdev_nvme_attach_controller", 00:18:22.261 "req_id": 1 00:18:22.261 } 00:18:22.261 Got JSON-RPC error response 00:18:22.261 response: 00:18:22.261 { 00:18:22.261 "code": -5, 00:18:22.261 "message": "Input/output error" 00:18:22.261 } 00:18:22.261 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:22.261 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:22.261 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:22.261 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:22.261 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:22.261 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.261 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.261 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.261 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3723312 00:18:22.261 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3723312 ']' 00:18:22.261 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3723312 00:18:22.261 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:22.261 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:22.261 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3723312 00:18:22.262 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:22.262 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:22.262 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3723312' 00:18:22.262 killing process with pid 3723312 00:18:22.262 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3723312 00:18:22.262 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3723312 00:18:22.521 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:22.521 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:22.521 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:22.521 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:22.522 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3746392 00:18:22.522 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3746392 00:18:22.522 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:22.522 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3746392 ']' 00:18:22.522 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.522 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.522 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.522 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.522 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.780 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.781 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:22.781 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:22.781 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:22.781 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.781 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.781 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:22.781 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3746392 00:18:22.781 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3746392 ']' 00:18:22.781 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.781 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.781 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
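With the target restarted under -L nvmf_auth, the following records load the per-run DHCHAP secrets into the target keyring and re-register the host against cnode0. As a reading aid only, a minimal sketch of the equivalent rpc.py calls (the key file names are the per-run temporaries echoed in the records below, and the target RPC socket is the default /var/tmp/spdk.sock announced above; the test itself issues these through its rpc_cmd helper) would be:

  # Target side: register key/ctrlr-key files with the keyring
  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.D6T
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QdT
  scripts/rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha512.2D2
  # Target side: allow the host NQN to authenticate to cnode0 with key3 (no ctrlr key in this step)
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
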
00:18:22.781 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.781 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.039 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.039 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:23.039 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:23.039 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.039 19:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.039 null0 00:18:23.039 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.039 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.D6T 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.QdT ]] 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QdT 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.G8p 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Egz ]] 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Egz 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:23.040 19:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.AlH 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.JKB ]] 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JKB 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2D2 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
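Once key3 is registered on the target, the host repeats the attach/verify/detach cycle seen throughout this transcript. A condensed, non-authoritative sketch of that host-side sequence, using only flags that appear in this run (the host RPC server listens on /var/tmp/host.sock, target RPCs use the default socket; the trace's hostrpc/rpc_cmd helpers expand to these calls):

  HOSTSOCK=/var/tmp/host.sock
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
  # Host side: attach to cnode0, authenticating with key3
  scripts/rpc.py -s $HOSTSOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $HOSTNQN -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
  # Verify the controller exists and the target reports the qpair as authenticated
  scripts/rpc.py -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name'              # expect: nvme0
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect: completed
  # Tear down before the next key is exercised
  scripts/rpc.py -s $HOSTSOCK bdev_nvme_detach_controller nvme0
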
00:18:23.040 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.976 nvme0n1 00:18:23.976 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.977 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.977 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.977 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.977 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.977 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.977 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.977 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.977 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.977 { 00:18:23.977 "cntlid": 1, 00:18:23.977 "qid": 0, 00:18:23.977 "state": "enabled", 00:18:23.977 "thread": "nvmf_tgt_poll_group_000", 00:18:23.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:23.977 "listen_address": { 00:18:23.977 "trtype": "TCP", 00:18:23.977 "adrfam": "IPv4", 00:18:23.977 "traddr": "10.0.0.2", 00:18:23.977 "trsvcid": "4420" 00:18:23.977 }, 00:18:23.977 "peer_address": { 00:18:23.977 "trtype": "TCP", 00:18:23.977 "adrfam": "IPv4", 00:18:23.977 "traddr": "10.0.0.1", 00:18:23.977 "trsvcid": "33804" 00:18:23.977 }, 00:18:23.977 "auth": { 00:18:23.977 "state": "completed", 00:18:23.977 "digest": "sha512", 00:18:23.977 "dhgroup": "ffdhe8192" 00:18:23.977 } 00:18:23.977 } 00:18:23.977 ]' 00:18:23.977 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.235 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.235 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.235 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.235 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.235 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.235 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.235 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.494 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:18:24.494 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:18:25.062 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.062 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:25.062 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.062 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.062 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.062 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:25.063 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.063 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.063 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.063 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:25.063 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:25.322 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:25.322 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:25.322 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:25.322 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:25.322 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.322 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:25.322 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.322 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:25.322 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.322 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.322 request: 00:18:25.322 { 00:18:25.322 "name": "nvme0", 00:18:25.322 "trtype": "tcp", 00:18:25.322 "traddr": "10.0.0.2", 00:18:25.322 "adrfam": "ipv4", 00:18:25.322 "trsvcid": "4420", 00:18:25.322 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:25.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:25.322 "prchk_reftag": false, 00:18:25.322 "prchk_guard": false, 00:18:25.322 "hdgst": false, 00:18:25.322 "ddgst": false, 00:18:25.322 "dhchap_key": "key3", 00:18:25.322 "allow_unrecognized_csi": false, 00:18:25.322 "method": "bdev_nvme_attach_controller", 00:18:25.322 "req_id": 1 00:18:25.322 } 00:18:25.322 Got JSON-RPC error response 00:18:25.322 response: 00:18:25.322 { 00:18:25.322 "code": -5, 00:18:25.322 "message": "Input/output error" 00:18:25.322 } 00:18:25.322 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:25.322 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.322 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.322 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.322 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:25.322 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:25.322 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:25.322 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:25.580 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:25.580 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:25.580 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:25.580 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:25.580 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.580 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:25.580 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.580 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:25.580 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.580 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.839 request: 00:18:25.839 { 00:18:25.839 "name": "nvme0", 00:18:25.839 "trtype": "tcp", 00:18:25.839 "traddr": "10.0.0.2", 00:18:25.839 "adrfam": "ipv4", 00:18:25.839 "trsvcid": "4420", 00:18:25.839 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:25.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:25.839 "prchk_reftag": false, 00:18:25.839 "prchk_guard": false, 00:18:25.839 "hdgst": false, 00:18:25.839 "ddgst": false, 00:18:25.839 "dhchap_key": "key3", 00:18:25.839 "allow_unrecognized_csi": false, 00:18:25.839 "method": "bdev_nvme_attach_controller", 00:18:25.839 "req_id": 1 00:18:25.839 } 00:18:25.839 Got JSON-RPC error response 00:18:25.839 response: 00:18:25.839 { 00:18:25.839 "code": -5, 00:18:25.839 "message": "Input/output error" 00:18:25.839 } 00:18:25.839 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:25.839 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.839 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.839 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.839 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:25.839 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:25.839 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:25.839 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.839 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.839 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:26.097 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:26.097 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.097 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.097 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.097 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:26.097 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.097 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.097 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.097 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:26.097 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:26.097 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:26.097 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:26.097 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.097 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:26.097 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.098 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:26.098 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:26.098 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:26.355 request: 00:18:26.355 { 00:18:26.355 "name": "nvme0", 00:18:26.355 "trtype": "tcp", 00:18:26.355 "traddr": "10.0.0.2", 00:18:26.355 "adrfam": "ipv4", 00:18:26.355 "trsvcid": "4420", 00:18:26.355 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:26.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:26.355 "prchk_reftag": false, 00:18:26.355 "prchk_guard": false, 00:18:26.355 "hdgst": false, 00:18:26.355 "ddgst": false, 00:18:26.355 "dhchap_key": "key0", 00:18:26.355 "dhchap_ctrlr_key": "key1", 00:18:26.355 "allow_unrecognized_csi": false, 00:18:26.355 "method": "bdev_nvme_attach_controller", 00:18:26.355 "req_id": 1 00:18:26.355 } 00:18:26.355 Got JSON-RPC error response 00:18:26.355 response: 00:18:26.355 { 00:18:26.355 "code": -5, 00:18:26.355 "message": "Input/output error" 00:18:26.355 } 00:18:26.355 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:26.355 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.355 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.355 19:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.355 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:26.355 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:26.356 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:26.613 nvme0n1 00:18:26.613 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:26.613 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:26.613 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.872 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.872 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.872 19:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.131 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:27.131 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.131 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.131 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.131 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:27.131 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:27.131 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:27.698 nvme0n1 00:18:27.957 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:27.957 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:27.957 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.957 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.957 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:27.957 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.957 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.957 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.957 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:27.957 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:27.957 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.215 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.215 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:18:28.215 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: --dhchap-ctrl-secret DHHC-1:03:MWFhNDFjZjNiMTUzNjNkNGZhOGM3YzY4OTViMDcxYzlkZjlhYzkyMDZjMzRhODA1YjIwMGVjOWQ0ZDFlMTEwNN/K0Gg=: 00:18:28.783 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:28.783 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:28.783 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:28.783 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:28.783 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:28.783 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:28.783 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:28.783 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.783 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.043 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:29.043 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:29.043 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:29.043 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:29.043 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.043 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:29.043 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.043 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:29.043 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:29.043 19:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:29.302 request: 00:18:29.302 { 00:18:29.302 "name": "nvme0", 00:18:29.302 "trtype": "tcp", 00:18:29.302 "traddr": "10.0.0.2", 00:18:29.302 "adrfam": "ipv4", 00:18:29.302 "trsvcid": "4420", 00:18:29.302 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:29.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:29.302 "prchk_reftag": false, 00:18:29.302 "prchk_guard": false, 00:18:29.302 "hdgst": false, 00:18:29.302 "ddgst": false, 00:18:29.302 "dhchap_key": "key1", 00:18:29.302 "allow_unrecognized_csi": false, 00:18:29.302 "method": "bdev_nvme_attach_controller", 00:18:29.302 "req_id": 1 00:18:29.302 } 00:18:29.302 Got JSON-RPC error response 00:18:29.302 response: 00:18:29.302 { 00:18:29.302 "code": -5, 00:18:29.302 "message": "Input/output error" 00:18:29.302 } 00:18:29.302 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:29.302 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:29.302 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:29.302 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:29.302 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:29.302 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:29.302 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:30.239 nvme0n1 00:18:30.239 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:30.239 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:30.239 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.497 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.497 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.497 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.497 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:30.497 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.497 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.497 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.497 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:30.497 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:30.497 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:30.756 nvme0n1 00:18:30.756 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:30.756 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:30.756 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.014 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.014 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.014 19:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.273 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:31.273 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.273 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.273 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.273 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: '' 2s 00:18:31.273 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:31.273 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:31.273 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: 00:18:31.273 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:31.273 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:31.273 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:31.273 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: ]] 00:18:31.273 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MjhlZTBmY2M4MGEyY2ZiNzJkZDYyYzE3YmE5ZTE3NjKxz1gV: 00:18:31.273 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:31.273 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:31.273 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: 2s 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: ]] 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:N2I1MTgyMzA4NWY4NThmYWUyMTU0YTkyNWNlZjk3NjZiYmRkZDZlZDcyODkzM2IxMdmCoA==: 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:33.175 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:35.709 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:35.709 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:35.709 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:35.709 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:35.709 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:35.709 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:35.709 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:35.709 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.709 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:35.709 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.709 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.709 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.709 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:35.709 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:35.709 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:35.967 nvme0n1 00:18:36.225 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:36.225 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.225 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.225 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.225 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:36.225 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:36.482 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:36.482 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:36.482 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.741 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.741 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:36.741 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.741 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.741 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.741 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:36.741 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:37.000 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:37.000 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:37.000 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.259 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.259 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:37.259 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.259 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.259 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.259 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:37.259 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:37.259 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:37.259 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:37.259 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.259 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:37.259 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.259 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:37.259 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:37.518 request: 00:18:37.518 { 00:18:37.518 "name": "nvme0", 00:18:37.518 "dhchap_key": "key1", 00:18:37.518 "dhchap_ctrlr_key": "key3", 00:18:37.518 "method": "bdev_nvme_set_keys", 00:18:37.518 "req_id": 1 00:18:37.518 } 00:18:37.518 Got JSON-RPC error response 00:18:37.518 response: 00:18:37.518 { 00:18:37.518 "code": -13, 00:18:37.518 "message": "Permission denied" 00:18:37.518 } 00:18:37.777 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:37.777 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:37.777 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:37.777 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:37.777 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:37.777 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:37.777 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.777 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:37.777 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:39.152 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:39.152 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:39.152 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.152 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:39.152 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:39.152 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.152 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.152 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.152 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:39.152 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:39.152 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:39.719 nvme0n1 00:18:39.719 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:39.719 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.719 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.719 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.719 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:39.719 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:39.719 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:39.719 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
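
For reference, the re-key flow that the surrounding entries exercise reduces to the short sketch below. It only reuses RPCs, flags, NQNs and key names that appear in the trace above; the RPC script path and socket are the ones from this run, and the actual DHHC-1 key material is elided, so treat it as an illustration of the sequence rather than a standalone test.

    # Sketch of the DH-HMAC-CHAP re-key sequence traced above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

    # 1. Rotate the keys the target will accept for this host (target-side RPC).
    $rpc nvmf_subsystem_set_keys "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key key3

    # 2. Re-authenticate the existing host-side controller with the matching pair
    #    (this goes to the host app's RPC socket, not the target's).
    $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

    # 3. Confirm the controller survived re-authentication.
    $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

    # A mismatched pair (e.g. --dhchap-key key1 --dhchap-ctrlr-key key3) is expected to
    # fail with JSON-RPC error -13 "Permission denied", which is what the NOT wrappers
    # in the trace assert.
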
00:18:39.719 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.719 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:39.719 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.719 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:39.719 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:40.284 request: 00:18:40.284 { 00:18:40.284 "name": "nvme0", 00:18:40.284 "dhchap_key": "key2", 00:18:40.284 "dhchap_ctrlr_key": "key0", 00:18:40.284 "method": "bdev_nvme_set_keys", 00:18:40.284 "req_id": 1 00:18:40.284 } 00:18:40.284 Got JSON-RPC error response 00:18:40.284 response: 00:18:40.284 { 00:18:40.284 "code": -13, 00:18:40.284 "message": "Permission denied" 00:18:40.284 } 00:18:40.284 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:40.284 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:40.284 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:40.284 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:40.284 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:40.284 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:40.284 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.542 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:40.542 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:41.475 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:41.475 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:41.475 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.733 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:41.733 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:41.733 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:41.733 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3723333 00:18:41.733 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3723333 ']' 00:18:41.733 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3723333 00:18:41.733 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:41.733 
19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.733 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3723333 00:18:41.733 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:41.733 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:41.733 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3723333' 00:18:41.733 killing process with pid 3723333 00:18:41.733 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3723333 00:18:41.733 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3723333 00:18:41.992 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:41.992 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:41.992 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:41.992 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:41.992 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:41.992 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:41.992 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:41.992 rmmod nvme_tcp 00:18:41.992 rmmod nvme_fabrics 00:18:41.992 rmmod nvme_keyring 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3746392 ']' 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3746392 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3746392 ']' 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3746392 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3746392 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3746392' 00:18:42.250 killing process with pid 3746392 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3746392 00:18:42.250 19:20:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3746392 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.250 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.785 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:44.785 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.D6T /tmp/spdk.key-sha256.G8p /tmp/spdk.key-sha384.AlH /tmp/spdk.key-sha512.2D2 /tmp/spdk.key-sha512.QdT /tmp/spdk.key-sha384.Egz /tmp/spdk.key-sha256.JKB '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:44.785 00:18:44.785 real 2m32.116s 00:18:44.785 user 5m50.804s 00:18:44.785 sys 0m24.036s 00:18:44.785 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.785 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.785 ************************************ 00:18:44.785 END TEST nvmf_auth_target 00:18:44.785 ************************************ 00:18:44.785 19:20:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:44.785 19:20:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:44.785 19:20:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:44.785 19:20:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.785 19:20:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:44.785 ************************************ 00:18:44.785 START TEST nvmf_bdevio_no_huge 00:18:44.785 ************************************ 00:18:44.785 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:44.785 * Looking for test storage... 
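
The teardown that closes out the auth test above follows a fixed pattern; a condensed sketch is given below. The PIDs, NIC name and key files are the ones from this run, $host_pid/$tgt_pid stand in for the PIDs killed above, and the iptables line approximates the iptr helper as traced, so this is an outline of the cleanup rather than the harness's exact code.

    # Sketch of the cleanup sequence traced above; values are run-specific.
    kill "$host_pid" && wait "$host_pid"        # stop the host-side app (pid 3723333 here)
    kill "$tgt_pid"  && wait "$tgt_pid"         # stop the nvmf target (pid 3746392 here)
    sync
    modprobe -v -r nvme-tcp                     # also unloads nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK test rules
    ip -4 addr flush cvl_0_1                    # flush the test NIC address
    rm -f /tmp/spdk.key-*                       # discard the generated DH-HMAC-CHAP keys
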
00:18:44.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:44.785 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:44.785 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:18:44.785 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:44.785 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:44.785 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:44.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.786 --rc genhtml_branch_coverage=1 00:18:44.786 --rc genhtml_function_coverage=1 00:18:44.786 --rc genhtml_legend=1 00:18:44.786 --rc geninfo_all_blocks=1 00:18:44.786 --rc geninfo_unexecuted_blocks=1 00:18:44.786 00:18:44.786 ' 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:44.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.786 --rc genhtml_branch_coverage=1 00:18:44.786 --rc genhtml_function_coverage=1 00:18:44.786 --rc genhtml_legend=1 00:18:44.786 --rc geninfo_all_blocks=1 00:18:44.786 --rc geninfo_unexecuted_blocks=1 00:18:44.786 00:18:44.786 ' 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:44.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.786 --rc genhtml_branch_coverage=1 00:18:44.786 --rc genhtml_function_coverage=1 00:18:44.786 --rc genhtml_legend=1 00:18:44.786 --rc geninfo_all_blocks=1 00:18:44.786 --rc geninfo_unexecuted_blocks=1 00:18:44.786 00:18:44.786 ' 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:44.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.786 --rc genhtml_branch_coverage=1 00:18:44.786 --rc genhtml_function_coverage=1 00:18:44.786 --rc genhtml_legend=1 00:18:44.786 --rc geninfo_all_blocks=1 00:18:44.786 --rc geninfo_unexecuted_blocks=1 00:18:44.786 00:18:44.786 ' 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:44.786 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:44.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:44.787 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:51.356 
19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:51.356 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:51.356 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:51.356 Found net devices under 0000:86:00.0: cvl_0_0 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:51.356 Found net devices under 0000:86:00.1: cvl_0_1 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:51.356 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:51.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:51.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:18:51.357 00:18:51.357 --- 10.0.0.2 ping statistics --- 00:18:51.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.357 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:51.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:51.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:18:51.357 00:18:51.357 --- 10.0.0.1 ping statistics --- 00:18:51.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.357 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3753262 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3753262 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3753262 ']' 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.357 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.357 [2024-11-26 19:20:13.676430] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:18:51.357 [2024-11-26 19:20:13.676474] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:51.357 [2024-11-26 19:20:13.759181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:51.357 [2024-11-26 19:20:13.805638] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.357 [2024-11-26 19:20:13.805673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.357 [2024-11-26 19:20:13.805680] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.357 [2024-11-26 19:20:13.805686] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.357 [2024-11-26 19:20:13.805692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
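[editor's note, illustrative sketch] At this point nvmftestinit has moved the target-side e810 port (cvl_0_0) into the cvl_0_0_ns_spdk namespace, assigned 10.0.0.2/10.0.0.1, opened TCP port 4420 in iptables, and nvmfappstart has launched nvmf_tgt inside that namespace with --no-huge -s 1024 -m 0x78 and is waiting on /var/tmp/spdk.sock. A condensed, standalone rendering of that flow is below; interface names, addresses, and the nvmf_tgt path are copied from the trace, while the socket-wait loop is only a stand-in for the suite's waitforlisten helper, not the real implementation.

    # Sketch only -- condensed from the commands visible in the trace above (run as root).
    NS=cvl_0_0_ns_spdk
    TGT_IF=cvl_0_0            # target-side port, moved into the namespace
    INI_IF=cvl_0_1            # initiator-side port, stays in the default namespace
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # comment tag from the log omitted

    # Start the target inside the namespace with the flags shown in the log,
    # then wait for its RPC socket to appear (stand-in for waitforlisten).
    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done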
00:18:51.357 [2024-11-26 19:20:13.806756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:51.357 [2024-11-26 19:20:13.806890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:51.357 [2024-11-26 19:20:13.807015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:51.357 [2024-11-26 19:20:13.807016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.615 [2024-11-26 19:20:14.536369] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.615 Malloc0 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.615 [2024-11-26 19:20:14.580687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:51.615 { 00:18:51.615 "params": { 00:18:51.615 "name": "Nvme$subsystem", 00:18:51.615 "trtype": "$TEST_TRANSPORT", 00:18:51.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:51.615 "adrfam": "ipv4", 00:18:51.615 "trsvcid": "$NVMF_PORT", 00:18:51.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:51.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:51.615 "hdgst": ${hdgst:-false}, 00:18:51.615 "ddgst": ${ddgst:-false} 00:18:51.615 }, 00:18:51.615 "method": "bdev_nvme_attach_controller" 00:18:51.615 } 00:18:51.615 EOF 00:18:51.615 )") 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:51.615 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:51.615 "params": { 00:18:51.615 "name": "Nvme1", 00:18:51.615 "trtype": "tcp", 00:18:51.615 "traddr": "10.0.0.2", 00:18:51.615 "adrfam": "ipv4", 00:18:51.615 "trsvcid": "4420", 00:18:51.615 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.615 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:51.615 "hdgst": false, 00:18:51.615 "ddgst": false 00:18:51.615 }, 00:18:51.615 "method": "bdev_nvme_attach_controller" 00:18:51.615 }' 00:18:51.615 [2024-11-26 19:20:14.630512] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:18:51.615 [2024-11-26 19:20:14.630554] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3753313 ] 00:18:51.615 [2024-11-26 19:20:14.708832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:51.899 [2024-11-26 19:20:14.757691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.899 [2024-11-26 19:20:14.757760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.899 [2024-11-26 19:20:14.757760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.899 I/O targets: 00:18:51.899 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:51.899 00:18:51.899 00:18:51.899 CUnit - A unit testing framework for C - Version 2.1-3 00:18:51.900 http://cunit.sourceforge.net/ 00:18:51.900 00:18:51.900 00:18:51.900 Suite: bdevio tests on: Nvme1n1 00:18:51.900 Test: blockdev write read block ...passed 00:18:52.215 Test: blockdev write zeroes read block ...passed 00:18:52.215 Test: blockdev write zeroes read no split ...passed 00:18:52.215 Test: blockdev write zeroes read split ...passed 00:18:52.215 Test: blockdev write zeroes read split partial ...passed 00:18:52.215 Test: blockdev reset ...[2024-11-26 19:20:15.043278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:52.215 [2024-11-26 19:20:15.043342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6b8e0 (9): Bad file descriptor 00:18:52.215 [2024-11-26 19:20:15.098491] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:18:52.215 passed 00:18:52.215 Test: blockdev write read 8 blocks ...passed 00:18:52.215 Test: blockdev write read size > 128k ...passed 00:18:52.215 Test: blockdev write read invalid size ...passed 00:18:52.215 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:52.215 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:52.215 Test: blockdev write read max offset ...passed 00:18:52.215 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:52.215 Test: blockdev writev readv 8 blocks ...passed 00:18:52.215 Test: blockdev writev readv 30 x 1block ...passed 00:18:52.215 Test: blockdev writev readv block ...passed 00:18:52.498 Test: blockdev writev readv size > 128k ...passed 00:18:52.498 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:52.498 Test: blockdev comparev and writev ...[2024-11-26 19:20:15.348522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.498 [2024-11-26 19:20:15.348550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:52.498 [2024-11-26 19:20:15.348564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.498 [2024-11-26 19:20:15.348571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:52.498 [2024-11-26 19:20:15.348817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.498 [2024-11-26 19:20:15.348831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:52.498 [2024-11-26 19:20:15.348843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.498 [2024-11-26 19:20:15.348850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:52.498 [2024-11-26 19:20:15.349086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.498 [2024-11-26 19:20:15.349102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:52.498 [2024-11-26 19:20:15.349114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.498 [2024-11-26 19:20:15.349121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:52.498 [2024-11-26 19:20:15.349372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.498 [2024-11-26 19:20:15.349382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.498 [2024-11-26 19:20:15.349394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.498 [2024-11-26 19:20:15.349402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:52.498 passed 00:18:52.498 Test: blockdev nvme passthru rw ...passed 00:18:52.498 Test: blockdev nvme passthru vendor specific ...[2024-11-26 19:20:15.432039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:52.498 [2024-11-26 19:20:15.432056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:52.498 [2024-11-26 19:20:15.432158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:52.498 [2024-11-26 19:20:15.432167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:52.498 [2024-11-26 19:20:15.432264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:52.498 [2024-11-26 19:20:15.432273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:52.498 [2024-11-26 19:20:15.432373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:52.498 [2024-11-26 19:20:15.432382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:52.498 passed 00:18:52.498 Test: blockdev nvme admin passthru ...passed 00:18:52.498 Test: blockdev copy ...passed 00:18:52.498 00:18:52.498 Run Summary: Type Total Ran Passed Failed Inactive 00:18:52.498 suites 1 1 n/a 0 0 00:18:52.498 tests 23 23 23 0 0 00:18:52.498 asserts 152 152 152 0 n/a 00:18:52.498 00:18:52.498 Elapsed time = 1.142 seconds 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:52.800 rmmod nvme_tcp 00:18:52.800 rmmod nvme_fabrics 00:18:52.800 rmmod nvme_keyring 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3753262 ']' 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3753262 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3753262 ']' 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3753262 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3753262 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3753262' 00:18:52.800 killing process with pid 3753262 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3753262 00:18:52.800 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3753262 00:18:53.097 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:53.097 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:53.097 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:53.097 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:53.097 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:53.097 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:53.097 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:53.097 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:53.097 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:53.097 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.097 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.097 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:55.632 00:18:55.632 real 0m10.777s 00:18:55.632 user 0m13.099s 00:18:55.632 sys 0m5.344s 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:55.632 ************************************ 00:18:55.632 END TEST nvmf_bdevio_no_huge 00:18:55.632 ************************************ 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:55.632 ************************************ 00:18:55.632 START TEST nvmf_tls 00:18:55.632 ************************************ 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:55.632 * Looking for test storage... 00:18:55.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:55.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.632 --rc genhtml_branch_coverage=1 00:18:55.632 --rc genhtml_function_coverage=1 00:18:55.632 --rc genhtml_legend=1 00:18:55.632 --rc geninfo_all_blocks=1 00:18:55.632 --rc geninfo_unexecuted_blocks=1 00:18:55.632 00:18:55.632 ' 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:55.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.632 --rc genhtml_branch_coverage=1 00:18:55.632 --rc genhtml_function_coverage=1 00:18:55.632 --rc genhtml_legend=1 00:18:55.632 --rc geninfo_all_blocks=1 00:18:55.632 --rc geninfo_unexecuted_blocks=1 00:18:55.632 00:18:55.632 ' 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:55.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.632 --rc genhtml_branch_coverage=1 00:18:55.632 --rc genhtml_function_coverage=1 00:18:55.632 --rc genhtml_legend=1 00:18:55.632 --rc geninfo_all_blocks=1 00:18:55.632 --rc geninfo_unexecuted_blocks=1 00:18:55.632 00:18:55.632 ' 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:55.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.632 --rc genhtml_branch_coverage=1 00:18:55.632 --rc genhtml_function_coverage=1 00:18:55.632 --rc genhtml_legend=1 00:18:55.632 --rc geninfo_all_blocks=1 00:18:55.632 --rc geninfo_unexecuted_blocks=1 00:18:55.632 00:18:55.632 ' 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
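[editor's note, illustrative sketch] The cmp_versions trace repeated here (and earlier in the bdevio run) is a component-wise version comparison: the detected lcov version and the threshold are split on '.', '-' and ':' into arrays and compared element by element until one side wins, which is how 'lt 1.15 2' decides to enable the extra lcov branch/function options. A simplified, self-contained rendering of that idea in plain bash follows; the function name and variables are illustrative, not the exact scripts/common.sh implementation, and numeric components are assumed.

    # Illustrative version comparison in the spirit of the cmp_versions trace above.
    version_lt() {                 # returns 0 (success) if version $1 < version $2
      local IFS='.-:'              # same separator set the trace splits on
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
        (( x > y )) && return 1
        (( x < y )) && return 0
      done
      return 1                     # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"   # the branch taken in this log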
00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:55.632 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:55.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:55.633 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:02.202 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:02.202 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:02.202 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:02.203 Found net devices under 0000:86:00.0: cvl_0_0 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:02.203 Found net devices under 0000:86:00.1: cvl_0_1 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:02.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:02.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:19:02.203 00:19:02.203 --- 10.0.0.2 ping statistics --- 00:19:02.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.203 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:02.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:02.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:19:02.203 00:19:02.203 --- 10.0.0.1 ping statistics --- 00:19:02.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.203 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3757066 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3757066 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3757066 ']' 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.203 [2024-11-26 19:20:24.545796] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:19:02.203 [2024-11-26 19:20:24.545851] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.203 [2024-11-26 19:20:24.628582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.203 [2024-11-26 19:20:24.669022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.203 [2024-11-26 19:20:24.669056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.203 [2024-11-26 19:20:24.669063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.203 [2024-11-26 19:20:24.669069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.203 [2024-11-26 19:20:24.669074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:02.203 [2024-11-26 19:20:24.669628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:02.203 true 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:02.203 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:02.203 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:02.203 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:02.203 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:02.204 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:02.204 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:02.462 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:02.462 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:02.462 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:02.720 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # jq -r .tls_version 00:19:02.720 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:02.979 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:02.979 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:02.979 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:02.979 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:02.979 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:02.979 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:02.979 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:03.237 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:03.237 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:03.496 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:03.496 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:03.496 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:03.496 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:03.496 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.aaiUOCGjhN 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.nL6JRdn6hj 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.aaiUOCGjhN 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.nL6JRdn6hj 00:19:03.755 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:04.014 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:04.272 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.aaiUOCGjhN 00:19:04.272 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.aaiUOCGjhN 00:19:04.272 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:04.531 [2024-11-26 19:20:27.471541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.532 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:04.790 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:04.790 [2024-11-26 19:20:27.840474] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:04.790 [2024-11-26 19:20:27.840701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.790 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:05.049 malloc0 00:19:05.049 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:05.307 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.aaiUOCGjhN 00:19:05.307 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:05.565 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.aaiUOCGjhN 00:19:17.768 Initializing NVMe Controllers 00:19:17.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:17.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:17.768 Initialization complete. Launching workers. 00:19:17.768 ======================================================== 00:19:17.768 Latency(us) 00:19:17.768 Device Information : IOPS MiB/s Average min max 00:19:17.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16651.37 65.04 3843.66 851.07 5520.62 00:19:17.768 ======================================================== 00:19:17.768 Total : 16651.37 65.04 3843.66 851.07 5520.62 00:19:17.768 00:19:17.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aaiUOCGjhN 00:19:17.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:17.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:17.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:17.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aaiUOCGjhN 00:19:17.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:17.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3759440 00:19:17.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:17.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:17.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3759440 /var/tmp/bdevperf.sock 00:19:17.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3759440 ']' 00:19:17.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:17.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:17.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.768 [2024-11-26 19:20:38.743829] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:19:17.768 [2024-11-26 19:20:38.743879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3759440 ] 00:19:17.768 [2024-11-26 19:20:38.818769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.768 [2024-11-26 19:20:38.860503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:17.768 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aaiUOCGjhN 00:19:17.768 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:17.768 [2024-11-26 19:20:39.300958] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:17.768 TLSTESTn1 00:19:17.768 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:17.768 Running I/O for 10 seconds... 
00:19:18.704 5496.00 IOPS, 21.47 MiB/s [2024-11-26T18:20:42.752Z] 5495.50 IOPS, 21.47 MiB/s [2024-11-26T18:20:43.687Z] 5565.67 IOPS, 21.74 MiB/s [2024-11-26T18:20:44.623Z] 5553.00 IOPS, 21.69 MiB/s [2024-11-26T18:20:45.567Z] 5558.60 IOPS, 21.71 MiB/s [2024-11-26T18:20:46.502Z] 5577.67 IOPS, 21.79 MiB/s [2024-11-26T18:20:47.877Z] 5557.14 IOPS, 21.71 MiB/s [2024-11-26T18:20:48.812Z] 5561.88 IOPS, 21.73 MiB/s [2024-11-26T18:20:49.745Z] 5565.44 IOPS, 21.74 MiB/s [2024-11-26T18:20:49.745Z] 5564.80 IOPS, 21.74 MiB/s 00:19:26.631 Latency(us) 00:19:26.631 [2024-11-26T18:20:49.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.631 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:26.631 Verification LBA range: start 0x0 length 0x2000 00:19:26.631 TLSTESTn1 : 10.02 5568.12 21.75 0.00 0.00 22950.56 6210.32 21845.33 00:19:26.631 [2024-11-26T18:20:49.745Z] =================================================================================================================== 00:19:26.631 [2024-11-26T18:20:49.745Z] Total : 5568.12 21.75 0.00 0.00 22950.56 6210.32 21845.33 00:19:26.631 { 00:19:26.631 "results": [ 00:19:26.631 { 00:19:26.631 "job": "TLSTESTn1", 00:19:26.631 "core_mask": "0x4", 00:19:26.631 "workload": "verify", 00:19:26.631 "status": "finished", 00:19:26.631 "verify_range": { 00:19:26.631 "start": 0, 00:19:26.631 "length": 8192 00:19:26.631 }, 00:19:26.631 "queue_depth": 128, 00:19:26.631 "io_size": 4096, 00:19:26.631 "runtime": 10.016843, 00:19:26.631 "iops": 5568.121612767615, 00:19:26.631 "mibps": 21.750475049873497, 00:19:26.631 "io_failed": 0, 00:19:26.631 "io_timeout": 0, 00:19:26.631 "avg_latency_us": 22950.559205105546, 00:19:26.631 "min_latency_us": 6210.31619047619, 00:19:26.631 "max_latency_us": 21845.333333333332 00:19:26.631 } 00:19:26.631 ], 00:19:26.631 "core_count": 1 00:19:26.631 } 00:19:26.631 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:26.631 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3759440 00:19:26.631 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3759440 ']' 00:19:26.631 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3759440 00:19:26.631 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:26.631 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.631 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3759440 00:19:26.631 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:26.631 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:26.631 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3759440' 00:19:26.631 killing process with pid 3759440 00:19:26.631 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3759440 00:19:26.631 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.631 00:19:26.631 Latency(us) 00:19:26.631 [2024-11-26T18:20:49.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.631 [2024-11-26T18:20:49.745Z] 
=================================================================================================================== 00:19:26.631 [2024-11-26T18:20:49.745Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:26.631 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3759440 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nL6JRdn6hj 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nL6JRdn6hj 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nL6JRdn6hj 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.nL6JRdn6hj 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3761245 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3761245 /var/tmp/bdevperf.sock 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3761245 ']' 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.890 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.890 [2024-11-26 19:20:49.803109] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:19:26.890 [2024-11-26 19:20:49.803153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3761245 ] 00:19:26.890 [2024-11-26 19:20:49.873132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.890 [2024-11-26 19:20:49.910881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.148 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.148 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:27.148 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nL6JRdn6hj 00:19:27.148 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:27.406 [2024-11-26 19:20:50.414776] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:27.406 [2024-11-26 19:20:50.420866] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:27.406 [2024-11-26 19:20:50.421216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16961a0 (107): Transport endpoint is not connected 00:19:27.406 [2024-11-26 19:20:50.422210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16961a0 (9): Bad file descriptor 00:19:27.406 [2024-11-26 19:20:50.423212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:27.406 [2024-11-26 19:20:50.423227] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:27.406 [2024-11-26 19:20:50.423235] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:27.406 [2024-11-26 19:20:50.423247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
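Note on the failure above: this is the expected outcome of the first negative case (the `NOT run_bdevperf ... /tmp/tmp.nL6JRdn6hj` step in target/tls.sh). The initiator registers the second interchange key while the target subsystem was provisioned with the first key for host1, so the TLS handshake is torn down (hence the "Transport endpoint is not connected" errors), and the JSON-RPC request/response dump that follows records the resulting -5 Input/output error. A minimal sketch of the same check, using only rpc.py calls that appear in this trace (socket path, key file, and NQNs taken from the surrounding log):

```bash
#!/usr/bin/env bash
# Sketch of the mismatched-key negative test; paths and NQNs assumed from the trace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Register the second interchange key on the initiator side; the target subsystem
# was provisioned with the first key for host1, so the TLS handshake cannot complete.
"$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.nL6JRdn6hj

# The attach is expected to fail; rpc.py prints the request/response pair and
# exits non-zero with "Input/output error" (code -5), as in the dump below.
if "$rpc" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
    echo "unexpected: attach succeeded with a mismatched key" >&2
    exit 1
fi
echo "attach rejected as expected"
```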
00:19:27.406 request: 00:19:27.406 { 00:19:27.406 "name": "TLSTEST", 00:19:27.406 "trtype": "tcp", 00:19:27.406 "traddr": "10.0.0.2", 00:19:27.406 "adrfam": "ipv4", 00:19:27.406 "trsvcid": "4420", 00:19:27.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.406 "prchk_reftag": false, 00:19:27.406 "prchk_guard": false, 00:19:27.406 "hdgst": false, 00:19:27.406 "ddgst": false, 00:19:27.406 "psk": "key0", 00:19:27.406 "allow_unrecognized_csi": false, 00:19:27.406 "method": "bdev_nvme_attach_controller", 00:19:27.406 "req_id": 1 00:19:27.406 } 00:19:27.406 Got JSON-RPC error response 00:19:27.406 response: 00:19:27.406 { 00:19:27.406 "code": -5, 00:19:27.406 "message": "Input/output error" 00:19:27.406 } 00:19:27.406 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3761245 00:19:27.406 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3761245 ']' 00:19:27.406 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3761245 00:19:27.406 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:27.406 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.406 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3761245 00:19:27.406 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:27.406 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:27.406 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3761245' 00:19:27.406 killing process with pid 3761245 00:19:27.406 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3761245 00:19:27.406 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.406 00:19:27.406 Latency(us) 00:19:27.406 [2024-11-26T18:20:50.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.406 [2024-11-26T18:20:50.520Z] =================================================================================================================== 00:19:27.406 [2024-11-26T18:20:50.520Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:27.406 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3761245 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aaiUOCGjhN 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.aaiUOCGjhN 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aaiUOCGjhN 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aaiUOCGjhN 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3761480 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3761480 /var/tmp/bdevperf.sock 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3761480 ']' 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.665 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.666 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:27.666 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.666 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.666 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.666 [2024-11-26 19:20:50.702245] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:19:27.666 [2024-11-26 19:20:50.702294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3761480 ] 00:19:27.666 [2024-11-26 19:20:50.768793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.925 [2024-11-26 19:20:50.805894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.925 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.925 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:27.925 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aaiUOCGjhN 00:19:28.183 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:28.183 [2024-11-26 19:20:51.257193] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:28.183 [2024-11-26 19:20:51.262898] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:28.183 [2024-11-26 19:20:51.262919] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:28.183 [2024-11-26 19:20:51.262940] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:28.183 [2024-11-26 19:20:51.263549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149f1a0 (107): Transport endpoint is not connected 00:19:28.183 [2024-11-26 19:20:51.264543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149f1a0 (9): Bad file descriptor 00:19:28.183 [2024-11-26 19:20:51.265545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:28.183 [2024-11-26 19:20:51.265560] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:28.183 [2024-11-26 19:20:51.265567] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:28.183 [2024-11-26 19:20:51.265575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
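This second failure exercises the PSK identity lookup rather than the key bytes: the key file is the correct one for cnode1, but the connection identifies itself as nqn.2016-06.io.spdk:host2, and the target resolves keys by the identity string "NVMe0R01 <hostnqn> <subnqn>" built from the hosts registered with nvmf_subsystem_add_host --psk. Only host1 was registered, so tcp_sock_get_key finds nothing and the handshake is rejected; the request/response dump that follows shows the same -5 error. A minimal sketch of the registration and the failing attach, assuming the socket and key paths from the surrounding trace:

```bash
#!/usr/bin/env bash
# Sketch of the host-identity check: key0 is bound to host1 only on the target,
# so presenting the same key as host2 yields "Could not find PSK for identity".
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Target side (already done earlier in this trace): bind the key to host1 only.
"$rpc" keyring_file_add_key key0 /tmp/tmp.aaiUOCGjhN
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# Initiator side: same key file, but the connection identifies itself as host2,
# so no PSK matches the identity "NVMe0R01 <hostnqn> <subnqn>" and the attach fails.
"$rpc" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aaiUOCGjhN
if "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0; then
    echo "unexpected: attach succeeded as an unregistered host" >&2
    exit 1
fi
echo "attach rejected as expected"
```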
00:19:28.183 request: 00:19:28.183 { 00:19:28.183 "name": "TLSTEST", 00:19:28.183 "trtype": "tcp", 00:19:28.183 "traddr": "10.0.0.2", 00:19:28.183 "adrfam": "ipv4", 00:19:28.183 "trsvcid": "4420", 00:19:28.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.183 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:28.183 "prchk_reftag": false, 00:19:28.183 "prchk_guard": false, 00:19:28.183 "hdgst": false, 00:19:28.183 "ddgst": false, 00:19:28.183 "psk": "key0", 00:19:28.183 "allow_unrecognized_csi": false, 00:19:28.183 "method": "bdev_nvme_attach_controller", 00:19:28.183 "req_id": 1 00:19:28.183 } 00:19:28.183 Got JSON-RPC error response 00:19:28.183 response: 00:19:28.183 { 00:19:28.183 "code": -5, 00:19:28.183 "message": "Input/output error" 00:19:28.183 } 00:19:28.183 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3761480 00:19:28.183 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3761480 ']' 00:19:28.183 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3761480 00:19:28.442 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:28.442 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.442 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3761480 00:19:28.442 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:28.442 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:28.442 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3761480' 00:19:28.442 killing process with pid 3761480 00:19:28.442 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3761480 00:19:28.442 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.442 00:19:28.442 Latency(us) 00:19:28.442 [2024-11-26T18:20:51.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.442 [2024-11-26T18:20:51.556Z] =================================================================================================================== 00:19:28.442 [2024-11-26T18:20:51.556Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:28.442 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3761480 00:19:28.442 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aaiUOCGjhN 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.aaiUOCGjhN 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aaiUOCGjhN 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aaiUOCGjhN 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3761538 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3761538 /var/tmp/bdevperf.sock 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3761538 ']' 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.443 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.443 [2024-11-26 19:20:51.550554] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:19:28.443 [2024-11-26 19:20:51.550601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3761538 ] 00:19:28.701 [2024-11-26 19:20:51.627606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.702 [2024-11-26 19:20:51.669195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.702 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.702 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:28.702 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aaiUOCGjhN 00:19:28.959 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:29.218 [2024-11-26 19:20:52.137861] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:29.218 [2024-11-26 19:20:52.145139] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:29.218 [2024-11-26 19:20:52.145160] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:29.218 [2024-11-26 19:20:52.145182] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:29.218 [2024-11-26 19:20:52.145211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7c1a0 (107): Transport endpoint is not connected 00:19:29.218 [2024-11-26 19:20:52.146205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7c1a0 (9): Bad file descriptor 00:19:29.218 [2024-11-26 19:20:52.147206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:29.218 [2024-11-26 19:20:52.147217] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:29.218 [2024-11-26 19:20:52.147224] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:29.218 [2024-11-26 19:20:52.147233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
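The failure above is the expected outcome of the tls.sh@153 negative test: the target derives the TLS PSK identity from both NQNs ("NVMe0R01 <hostnqn> <subnqn>"), and it has no PSK provisioned for the nqn.2016-06.io.spdk:host1 / nqn.2016-06.io.spdk:cnode2 combination, so the handshake is refused and the connection is torn down. For reference, the two RPCs the test drives against the bdevperf application's socket are taken verbatim from the trace above; only the Jenkins workspace prefix is shortened here to ./scripts/rpc.py, and the key path is the mktemp-style file created earlier in the run:

# bdevperf side was started as: build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aaiUOCGjhN
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0

Because the identity lookup fails on the target, the attach RPC returns the -5 Input/output error shown in the JSON-RPC response below.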
00:19:29.218 request: 00:19:29.218 { 00:19:29.218 "name": "TLSTEST", 00:19:29.218 "trtype": "tcp", 00:19:29.218 "traddr": "10.0.0.2", 00:19:29.218 "adrfam": "ipv4", 00:19:29.218 "trsvcid": "4420", 00:19:29.218 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:29.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.218 "prchk_reftag": false, 00:19:29.218 "prchk_guard": false, 00:19:29.218 "hdgst": false, 00:19:29.218 "ddgst": false, 00:19:29.218 "psk": "key0", 00:19:29.218 "allow_unrecognized_csi": false, 00:19:29.218 "method": "bdev_nvme_attach_controller", 00:19:29.218 "req_id": 1 00:19:29.218 } 00:19:29.218 Got JSON-RPC error response 00:19:29.218 response: 00:19:29.218 { 00:19:29.218 "code": -5, 00:19:29.218 "message": "Input/output error" 00:19:29.218 } 00:19:29.218 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3761538 00:19:29.218 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3761538 ']' 00:19:29.218 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3761538 00:19:29.218 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:29.218 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.218 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3761538 00:19:29.218 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:29.218 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:29.218 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3761538' 00:19:29.218 killing process with pid 3761538 00:19:29.218 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3761538 00:19:29.218 Received shutdown signal, test time was about 10.000000 seconds 00:19:29.218 00:19:29.218 Latency(us) 00:19:29.218 [2024-11-26T18:20:52.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.218 [2024-11-26T18:20:52.332Z] =================================================================================================================== 00:19:29.218 [2024-11-26T18:20:52.332Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:29.218 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3761538 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:29.477 
19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3761734 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3761734 /var/tmp/bdevperf.sock 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3761734 ']' 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.477 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.477 [2024-11-26 19:20:52.422340] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:19:29.477 [2024-11-26 19:20:52.422386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3761734 ] 00:19:29.477 [2024-11-26 19:20:52.486078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.477 [2024-11-26 19:20:52.522402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.735 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.735 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:29.735 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:29.735 [2024-11-26 19:20:52.784829] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:29.735 [2024-11-26 19:20:52.784863] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:29.735 request: 00:19:29.735 { 00:19:29.735 "name": "key0", 00:19:29.735 "path": "", 00:19:29.735 "method": "keyring_file_add_key", 00:19:29.735 "req_id": 1 00:19:29.735 } 00:19:29.735 Got JSON-RPC error response 00:19:29.735 response: 00:19:29.735 { 00:19:29.735 "code": -1, 00:19:29.735 "message": "Operation not permitted" 00:19:29.735 } 00:19:29.735 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:29.994 [2024-11-26 19:20:52.985438] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:29.994 [2024-11-26 19:20:52.985465] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:29.994 request: 00:19:29.994 { 00:19:29.994 "name": "TLSTEST", 00:19:29.994 "trtype": "tcp", 00:19:29.994 "traddr": "10.0.0.2", 00:19:29.994 "adrfam": "ipv4", 00:19:29.994 "trsvcid": "4420", 00:19:29.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.994 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.994 "prchk_reftag": false, 00:19:29.994 "prchk_guard": false, 00:19:29.994 "hdgst": false, 00:19:29.994 "ddgst": false, 00:19:29.994 "psk": "key0", 00:19:29.994 "allow_unrecognized_csi": false, 00:19:29.994 "method": "bdev_nvme_attach_controller", 00:19:29.994 "req_id": 1 00:19:29.994 } 00:19:29.994 Got JSON-RPC error response 00:19:29.994 response: 00:19:29.994 { 00:19:29.994 "code": -126, 00:19:29.994 "message": "Required key not available" 00:19:29.994 } 00:19:29.994 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3761734 00:19:29.994 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3761734 ']' 00:19:29.994 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3761734 00:19:29.994 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:29.994 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.994 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3761734 00:19:29.994 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:29.994 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:29.994 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3761734' 00:19:29.994 killing process with pid 3761734 00:19:29.994 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3761734 00:19:29.994 Received shutdown signal, test time was about 10.000000 seconds 00:19:29.994 00:19:29.994 Latency(us) 00:19:29.994 [2024-11-26T18:20:53.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.994 [2024-11-26T18:20:53.108Z] =================================================================================================================== 00:19:29.994 [2024-11-26T18:20:53.108Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:29.994 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3761734 00:19:30.256 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:30.256 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:30.256 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:30.256 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:30.256 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:30.256 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3757066 00:19:30.256 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3757066 ']' 00:19:30.256 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3757066 00:19:30.256 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:30.256 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.256 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3757066 00:19:30.256 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:30.256 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:30.256 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3757066' 00:19:30.256 killing process with pid 3757066 00:19:30.256 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3757066 00:19:30.256 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3757066 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:30.516 19:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.ASJZDZz3B0 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.ASJZDZz3B0 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3761975 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3761975 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3761975 ']' 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.516 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.517 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.517 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.517 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.517 [2024-11-26 19:20:53.538555] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:19:30.517 [2024-11-26 19:20:53.538598] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.517 [2024-11-26 19:20:53.613646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.776 [2024-11-26 19:20:53.650451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.776 [2024-11-26 19:20:53.650484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
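The key_long value generated above by format_interchange_psk is a TLS PSK in the NVMe/TCP interchange form: the NVMeTLSkey-1 prefix, a two-digit hash indicator ("02" here, selected by the trailing "2" argument), and a base64 blob of the configured key bytes followed by a 4-byte CRC-32 of those bytes, terminated by ":". Only the helper's trace is visible in this log, so the stand-alone sketch below (hypothetical name make_tls_psk_interchange) is an approximation of what nvmf/common.sh does, not its actual code; in particular the CRC byte order is an assumption, so treat it as illustrative only:

make_tls_psk_interchange() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # configured PSK bytes (the hex string is used as-is)
crc = zlib.crc32(key).to_bytes(4, 'little')  # 4-byte CRC-32 appended; little-endian order assumed
print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PYEOF
}
make_tls_psk_interchange 00112233445566778899aabbccddeeff0011223344556677 2

Run on the key from the trace, this should produce a string of the same shape as the NVMeTLSkey-1:02:...: value captured into key_long above (an exact match depends on the byte-order assumption noted in the comment).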
00:19:30.776 [2024-11-26 19:20:53.650491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.776 [2024-11-26 19:20:53.650496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.776 [2024-11-26 19:20:53.650501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.776 [2024-11-26 19:20:53.651072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.776 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.776 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:30.776 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:30.776 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:30.776 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.776 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.776 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.ASJZDZz3B0 00:19:30.776 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ASJZDZz3B0 00:19:30.776 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:31.035 [2024-11-26 19:20:53.955012] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.035 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:31.035 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:31.294 [2024-11-26 19:20:54.307940] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:31.294 [2024-11-26 19:20:54.308187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.294 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:31.552 malloc0 00:19:31.552 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:31.821 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ASJZDZz3B0 00:19:31.821 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:32.079 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ASJZDZz3B0 00:19:32.079 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:32.079 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:32.079 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:32.079 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ASJZDZz3B0 00:19:32.079 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.079 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3762238 00:19:32.079 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.079 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3762238 /var/tmp/bdevperf.sock 00:19:32.079 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3762238 ']' 00:19:32.079 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:32.079 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.079 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.079 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.079 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.079 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.079 [2024-11-26 19:20:55.113987] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:19:32.079 [2024-11-26 19:20:55.114033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3762238 ] 00:19:32.079 [2024-11-26 19:20:55.188408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.338 [2024-11-26 19:20:55.228039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.338 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.338 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:32.338 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ASJZDZz3B0 00:19:32.596 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:32.596 [2024-11-26 19:20:55.691161] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.855 TLSTESTn1 00:19:32.855 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:32.855 Running I/O for 10 seconds... 00:19:35.168 5448.00 IOPS, 21.28 MiB/s [2024-11-26T18:20:59.217Z] 5493.00 IOPS, 21.46 MiB/s [2024-11-26T18:21:00.152Z] 5531.00 IOPS, 21.61 MiB/s [2024-11-26T18:21:01.087Z] 5572.50 IOPS, 21.77 MiB/s [2024-11-26T18:21:02.025Z] 5542.80 IOPS, 21.65 MiB/s [2024-11-26T18:21:02.960Z] 5573.00 IOPS, 21.77 MiB/s [2024-11-26T18:21:03.896Z] 5581.71 IOPS, 21.80 MiB/s [2024-11-26T18:21:05.269Z] 5592.88 IOPS, 21.85 MiB/s [2024-11-26T18:21:06.204Z] 5593.11 IOPS, 21.85 MiB/s [2024-11-26T18:21:06.204Z] 5595.90 IOPS, 21.86 MiB/s 00:19:43.090 Latency(us) 00:19:43.090 [2024-11-26T18:21:06.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.090 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:43.090 Verification LBA range: start 0x0 length 0x2000 00:19:43.090 TLSTESTn1 : 10.02 5599.41 21.87 0.00 0.00 22823.69 4712.35 25715.08 00:19:43.090 [2024-11-26T18:21:06.204Z] =================================================================================================================== 00:19:43.090 [2024-11-26T18:21:06.204Z] Total : 5599.41 21.87 0.00 0.00 22823.69 4712.35 25715.08 00:19:43.090 { 00:19:43.090 "results": [ 00:19:43.090 { 00:19:43.090 "job": "TLSTESTn1", 00:19:43.090 "core_mask": "0x4", 00:19:43.090 "workload": "verify", 00:19:43.090 "status": "finished", 00:19:43.090 "verify_range": { 00:19:43.090 "start": 0, 00:19:43.090 "length": 8192 00:19:43.090 }, 00:19:43.090 "queue_depth": 128, 00:19:43.090 "io_size": 4096, 00:19:43.090 "runtime": 10.016234, 00:19:43.090 "iops": 5599.409917939218, 00:19:43.090 "mibps": 21.87269499195007, 00:19:43.090 "io_failed": 0, 00:19:43.090 "io_timeout": 0, 00:19:43.090 "avg_latency_us": 22823.694277104907, 00:19:43.090 "min_latency_us": 4712.350476190476, 00:19:43.090 "max_latency_us": 25715.078095238096 00:19:43.090 } 00:19:43.090 ], 00:19:43.090 
"core_count": 1 00:19:43.090 } 00:19:43.090 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:43.090 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3762238 00:19:43.090 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3762238 ']' 00:19:43.090 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3762238 00:19:43.090 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:43.090 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.090 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3762238 00:19:43.090 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:43.090 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:43.090 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3762238' 00:19:43.090 killing process with pid 3762238 00:19:43.090 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3762238 00:19:43.090 Received shutdown signal, test time was about 10.000000 seconds 00:19:43.090 00:19:43.090 Latency(us) 00:19:43.090 [2024-11-26T18:21:06.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.090 [2024-11-26T18:21:06.204Z] =================================================================================================================== 00:19:43.090 [2024-11-26T18:21:06.204Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:43.090 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3762238 00:19:43.090 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.ASJZDZz3B0 00:19:43.090 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ASJZDZz3B0 00:19:43.090 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:43.090 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ASJZDZz3B0 00:19:43.090 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:43.090 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:43.090 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:43.090 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:43.090 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ASJZDZz3B0 00:19:43.090 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:43.091 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:43.091 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:19:43.091 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ASJZDZz3B0 00:19:43.091 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:43.091 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3764189 00:19:43.091 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:43.091 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3764189 /var/tmp/bdevperf.sock 00:19:43.091 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3764189 ']' 00:19:43.091 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:43.091 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:43.091 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:43.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:43.091 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:43.091 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:43.091 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.091 [2024-11-26 19:21:06.185585] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:19:43.091 [2024-11-26 19:21:06.185633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3764189 ] 00:19:43.349 [2024-11-26 19:21:06.259013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.349 [2024-11-26 19:21:06.298292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.349 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.349 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:43.349 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ASJZDZz3B0 00:19:43.721 [2024-11-26 19:21:06.557529] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ASJZDZz3B0': 0100666 00:19:43.721 [2024-11-26 19:21:06.557557] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:43.721 request: 00:19:43.721 { 00:19:43.721 "name": "key0", 00:19:43.721 "path": "/tmp/tmp.ASJZDZz3B0", 00:19:43.721 "method": "keyring_file_add_key", 00:19:43.721 "req_id": 1 00:19:43.721 } 00:19:43.721 Got JSON-RPC error response 00:19:43.721 response: 00:19:43.721 { 00:19:43.721 "code": -1, 00:19:43.721 "message": "Operation not permitted" 00:19:43.721 } 00:19:43.721 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:43.721 [2024-11-26 19:21:06.734068] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:43.721 [2024-11-26 19:21:06.734101] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:43.721 request: 00:19:43.721 { 00:19:43.721 "name": "TLSTEST", 00:19:43.721 "trtype": "tcp", 00:19:43.721 "traddr": "10.0.0.2", 00:19:43.721 "adrfam": "ipv4", 00:19:43.721 "trsvcid": "4420", 00:19:43.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.721 "prchk_reftag": false, 00:19:43.721 "prchk_guard": false, 00:19:43.721 "hdgst": false, 00:19:43.721 "ddgst": false, 00:19:43.721 "psk": "key0", 00:19:43.721 "allow_unrecognized_csi": false, 00:19:43.721 "method": "bdev_nvme_attach_controller", 00:19:43.721 "req_id": 1 00:19:43.721 } 00:19:43.721 Got JSON-RPC error response 00:19:43.721 response: 00:19:43.721 { 00:19:43.721 "code": -126, 00:19:43.721 "message": "Required key not available" 00:19:43.721 } 00:19:43.721 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3764189 00:19:43.721 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3764189 ']' 00:19:43.721 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3764189 00:19:43.721 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:43.721 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.721 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3764189 00:19:43.721 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:43.721 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:43.721 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3764189' 00:19:43.721 killing process with pid 3764189 00:19:43.721 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3764189 00:19:43.721 Received shutdown signal, test time was about 10.000000 seconds 00:19:43.721 00:19:43.721 Latency(us) 00:19:43.721 [2024-11-26T18:21:06.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.721 [2024-11-26T18:21:06.835Z] =================================================================================================================== 00:19:43.721 [2024-11-26T18:21:06.835Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:43.721 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3764189 00:19:44.020 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:44.020 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:44.020 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:44.020 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:44.020 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:44.020 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3761975 00:19:44.020 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3761975 ']' 00:19:44.020 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3761975 00:19:44.020 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:44.020 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.020 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3761975 00:19:44.020 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:44.020 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:44.020 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3761975' 00:19:44.020 killing process with pid 3761975 00:19:44.020 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3761975 00:19:44.020 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3761975 00:19:44.310 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:44.310 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:44.310 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:44.310 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.310 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3764275 00:19:44.310 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:44.310 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3764275 00:19:44.310 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3764275 ']' 00:19:44.310 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.310 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.310 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.310 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.310 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.310 [2024-11-26 19:21:07.227247] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:19:44.310 [2024-11-26 19:21:07.227293] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.310 [2024-11-26 19:21:07.304471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.310 [2024-11-26 19:21:07.341732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.310 [2024-11-26 19:21:07.341770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.310 [2024-11-26 19:21:07.341777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.310 [2024-11-26 19:21:07.341786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.310 [2024-11-26 19:21:07.341791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
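With the 0666 copy of the key still in place, the setup_nvmf_tgt call that follows is expected to fail (tls.sh@178 wraps it in NOT). For reference, the target-side provisioning it drives over the default /var/tmp/spdk.sock is the RPC sequence below, copied from the surrounding trace with the Jenkins workspace prefix shortened:

./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ASJZDZz3B0    # rejected below: the file mode is 0666
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

As the 0600/0666 contrast in this run shows, the file-based keyring refuses key files that are accessible beyond their owner, so keyring_file_add_key fails with "Operation not permitted" and the subsequent nvmf_subsystem_add_host reports that key0 does not exist; once chmod 0600 is restored (tls.sh@182, further down) the identical sequence succeeds.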
00:19:44.310 [2024-11-26 19:21:07.342319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.569 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.569 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:44.569 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:44.569 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:44.569 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.569 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.569 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.ASJZDZz3B0 00:19:44.569 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:44.569 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ASJZDZz3B0 00:19:44.569 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:44.569 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.569 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:44.569 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.569 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.ASJZDZz3B0 00:19:44.569 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ASJZDZz3B0 00:19:44.569 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:44.569 [2024-11-26 19:21:07.654403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.827 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:44.827 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:45.085 [2024-11-26 19:21:08.047410] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:45.085 [2024-11-26 19:21:08.047615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.085 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:45.343 malloc0 00:19:45.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:45.601 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ASJZDZz3B0 00:19:45.601 [2024-11-26 
19:21:08.644881] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ASJZDZz3B0': 0100666 00:19:45.601 [2024-11-26 19:21:08.644915] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:45.601 request: 00:19:45.601 { 00:19:45.601 "name": "key0", 00:19:45.601 "path": "/tmp/tmp.ASJZDZz3B0", 00:19:45.601 "method": "keyring_file_add_key", 00:19:45.601 "req_id": 1 00:19:45.601 } 00:19:45.601 Got JSON-RPC error response 00:19:45.601 response: 00:19:45.601 { 00:19:45.601 "code": -1, 00:19:45.601 "message": "Operation not permitted" 00:19:45.601 } 00:19:45.601 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:45.859 [2024-11-26 19:21:08.845431] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:45.859 [2024-11-26 19:21:08.845468] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:45.859 request: 00:19:45.859 { 00:19:45.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.859 "host": "nqn.2016-06.io.spdk:host1", 00:19:45.859 "psk": "key0", 00:19:45.859 "method": "nvmf_subsystem_add_host", 00:19:45.859 "req_id": 1 00:19:45.859 } 00:19:45.859 Got JSON-RPC error response 00:19:45.859 response: 00:19:45.859 { 00:19:45.859 "code": -32603, 00:19:45.859 "message": "Internal error" 00:19:45.859 } 00:19:45.859 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:45.859 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:45.859 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:45.859 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:45.859 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3764275 00:19:45.859 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3764275 ']' 00:19:45.859 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3764275 00:19:45.859 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:45.859 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:45.859 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3764275 00:19:45.859 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:45.859 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:45.859 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3764275' 00:19:45.859 killing process with pid 3764275 00:19:45.859 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3764275 00:19:45.859 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3764275 00:19:46.118 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.ASJZDZz3B0 00:19:46.118 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:46.118 19:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:46.118 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:46.118 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.118 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3764965 00:19:46.118 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:46.118 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3764965 00:19:46.118 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3764965 ']' 00:19:46.118 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.118 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.118 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.118 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.118 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.118 [2024-11-26 19:21:09.161180] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:19:46.118 [2024-11-26 19:21:09.161230] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.376 [2024-11-26 19:21:09.243590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.376 [2024-11-26 19:21:09.285040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.376 [2024-11-26 19:21:09.285074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.376 [2024-11-26 19:21:09.285081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.376 [2024-11-26 19:21:09.285088] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.376 [2024-11-26 19:21:09.285093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
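From this point the run repeats the provisioning with the key back at mode 0600 (tls.sh@186), attaches TLSTESTn1 from a fresh bdevperf instance (pid 3765432), and captures the resulting target configuration with rpc.py save_config into tgtconf. For an interactive spot-check of such a setup, the target's view can be queried with RPCs along the following lines; these query calls are not part of this trace and assume a current SPDK tree:

./scripts/rpc.py keyring_get_keys        # should list key0 and its backing /tmp/tmp.* path
./scripts/rpc.py nvmf_get_subsystems     # cnode1 with its listener and allowed host nqn.2016-06.io.spdk:host1
./scripts/rpc.py save_config             # full JSON dump, as tls.sh@198 captures below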
00:19:46.376 [2024-11-26 19:21:09.285648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.941 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.941 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:46.941 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:46.941 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:46.941 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.941 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.941 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.ASJZDZz3B0 00:19:46.941 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ASJZDZz3B0 00:19:46.941 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:47.199 [2024-11-26 19:21:10.216573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.199 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:47.456 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:47.714 [2024-11-26 19:21:10.621609] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:47.714 [2024-11-26 19:21:10.621823] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.714 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:47.714 malloc0 00:19:47.972 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:47.972 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ASJZDZz3B0 00:19:48.231 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:48.491 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3765432 00:19:48.491 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:48.491 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:48.491 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3765432 /var/tmp/bdevperf.sock 00:19:48.491 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3765432 ']' 00:19:48.491 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.491 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.491 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.491 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.491 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.491 [2024-11-26 19:21:11.475274] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:19:48.491 [2024-11-26 19:21:11.475323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3765432 ] 00:19:48.491 [2024-11-26 19:21:11.551067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.491 [2024-11-26 19:21:11.592183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.749 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.749 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:48.749 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ASJZDZz3B0 00:19:49.007 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:49.008 [2024-11-26 19:21:12.056910] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.267 TLSTESTn1 00:19:49.267 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:49.526 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:49.526 "subsystems": [ 00:19:49.526 { 00:19:49.526 "subsystem": "keyring", 00:19:49.526 "config": [ 00:19:49.526 { 00:19:49.526 "method": "keyring_file_add_key", 00:19:49.526 "params": { 00:19:49.526 "name": "key0", 00:19:49.526 "path": "/tmp/tmp.ASJZDZz3B0" 00:19:49.526 } 00:19:49.526 } 00:19:49.526 ] 00:19:49.526 }, 00:19:49.526 { 00:19:49.526 "subsystem": "iobuf", 00:19:49.526 "config": [ 00:19:49.526 { 00:19:49.526 "method": "iobuf_set_options", 00:19:49.526 "params": { 00:19:49.526 "small_pool_count": 8192, 00:19:49.526 "large_pool_count": 1024, 00:19:49.526 "small_bufsize": 8192, 00:19:49.526 "large_bufsize": 135168, 00:19:49.526 "enable_numa": false 00:19:49.526 } 00:19:49.526 } 00:19:49.526 ] 00:19:49.526 }, 00:19:49.526 { 00:19:49.526 "subsystem": "sock", 00:19:49.526 "config": [ 00:19:49.526 { 00:19:49.526 "method": "sock_set_default_impl", 00:19:49.526 "params": { 00:19:49.526 "impl_name": "posix" 
00:19:49.526 } 00:19:49.526 }, 00:19:49.526 { 00:19:49.526 "method": "sock_impl_set_options", 00:19:49.526 "params": { 00:19:49.526 "impl_name": "ssl", 00:19:49.526 "recv_buf_size": 4096, 00:19:49.526 "send_buf_size": 4096, 00:19:49.526 "enable_recv_pipe": true, 00:19:49.526 "enable_quickack": false, 00:19:49.526 "enable_placement_id": 0, 00:19:49.526 "enable_zerocopy_send_server": true, 00:19:49.526 "enable_zerocopy_send_client": false, 00:19:49.526 "zerocopy_threshold": 0, 00:19:49.526 "tls_version": 0, 00:19:49.526 "enable_ktls": false 00:19:49.526 } 00:19:49.526 }, 00:19:49.526 { 00:19:49.526 "method": "sock_impl_set_options", 00:19:49.526 "params": { 00:19:49.526 "impl_name": "posix", 00:19:49.526 "recv_buf_size": 2097152, 00:19:49.526 "send_buf_size": 2097152, 00:19:49.526 "enable_recv_pipe": true, 00:19:49.526 "enable_quickack": false, 00:19:49.526 "enable_placement_id": 0, 00:19:49.526 "enable_zerocopy_send_server": true, 00:19:49.526 "enable_zerocopy_send_client": false, 00:19:49.526 "zerocopy_threshold": 0, 00:19:49.526 "tls_version": 0, 00:19:49.526 "enable_ktls": false 00:19:49.526 } 00:19:49.526 } 00:19:49.526 ] 00:19:49.526 }, 00:19:49.526 { 00:19:49.526 "subsystem": "vmd", 00:19:49.526 "config": [] 00:19:49.526 }, 00:19:49.526 { 00:19:49.526 "subsystem": "accel", 00:19:49.526 "config": [ 00:19:49.526 { 00:19:49.526 "method": "accel_set_options", 00:19:49.526 "params": { 00:19:49.526 "small_cache_size": 128, 00:19:49.526 "large_cache_size": 16, 00:19:49.526 "task_count": 2048, 00:19:49.526 "sequence_count": 2048, 00:19:49.526 "buf_count": 2048 00:19:49.526 } 00:19:49.526 } 00:19:49.526 ] 00:19:49.526 }, 00:19:49.526 { 00:19:49.526 "subsystem": "bdev", 00:19:49.526 "config": [ 00:19:49.526 { 00:19:49.526 "method": "bdev_set_options", 00:19:49.526 "params": { 00:19:49.526 "bdev_io_pool_size": 65535, 00:19:49.526 "bdev_io_cache_size": 256, 00:19:49.526 "bdev_auto_examine": true, 00:19:49.526 "iobuf_small_cache_size": 128, 00:19:49.526 "iobuf_large_cache_size": 16 00:19:49.526 } 00:19:49.526 }, 00:19:49.526 { 00:19:49.526 "method": "bdev_raid_set_options", 00:19:49.526 "params": { 00:19:49.526 "process_window_size_kb": 1024, 00:19:49.526 "process_max_bandwidth_mb_sec": 0 00:19:49.526 } 00:19:49.526 }, 00:19:49.526 { 00:19:49.526 "method": "bdev_iscsi_set_options", 00:19:49.526 "params": { 00:19:49.526 "timeout_sec": 30 00:19:49.526 } 00:19:49.526 }, 00:19:49.526 { 00:19:49.526 "method": "bdev_nvme_set_options", 00:19:49.526 "params": { 00:19:49.526 "action_on_timeout": "none", 00:19:49.526 "timeout_us": 0, 00:19:49.526 "timeout_admin_us": 0, 00:19:49.526 "keep_alive_timeout_ms": 10000, 00:19:49.526 "arbitration_burst": 0, 00:19:49.526 "low_priority_weight": 0, 00:19:49.526 "medium_priority_weight": 0, 00:19:49.526 "high_priority_weight": 0, 00:19:49.526 "nvme_adminq_poll_period_us": 10000, 00:19:49.526 "nvme_ioq_poll_period_us": 0, 00:19:49.526 "io_queue_requests": 0, 00:19:49.526 "delay_cmd_submit": true, 00:19:49.526 "transport_retry_count": 4, 00:19:49.526 "bdev_retry_count": 3, 00:19:49.526 "transport_ack_timeout": 0, 00:19:49.526 "ctrlr_loss_timeout_sec": 0, 00:19:49.526 "reconnect_delay_sec": 0, 00:19:49.526 "fast_io_fail_timeout_sec": 0, 00:19:49.526 "disable_auto_failback": false, 00:19:49.526 "generate_uuids": false, 00:19:49.526 "transport_tos": 0, 00:19:49.526 "nvme_error_stat": false, 00:19:49.526 "rdma_srq_size": 0, 00:19:49.526 "io_path_stat": false, 00:19:49.526 "allow_accel_sequence": false, 00:19:49.526 "rdma_max_cq_size": 0, 00:19:49.526 
"rdma_cm_event_timeout_ms": 0, 00:19:49.526 "dhchap_digests": [ 00:19:49.526 "sha256", 00:19:49.526 "sha384", 00:19:49.527 "sha512" 00:19:49.527 ], 00:19:49.527 "dhchap_dhgroups": [ 00:19:49.527 "null", 00:19:49.527 "ffdhe2048", 00:19:49.527 "ffdhe3072", 00:19:49.527 "ffdhe4096", 00:19:49.527 "ffdhe6144", 00:19:49.527 "ffdhe8192" 00:19:49.527 ] 00:19:49.527 } 00:19:49.527 }, 00:19:49.527 { 00:19:49.527 "method": "bdev_nvme_set_hotplug", 00:19:49.527 "params": { 00:19:49.527 "period_us": 100000, 00:19:49.527 "enable": false 00:19:49.527 } 00:19:49.527 }, 00:19:49.527 { 00:19:49.527 "method": "bdev_malloc_create", 00:19:49.527 "params": { 00:19:49.527 "name": "malloc0", 00:19:49.527 "num_blocks": 8192, 00:19:49.527 "block_size": 4096, 00:19:49.527 "physical_block_size": 4096, 00:19:49.527 "uuid": "05574cce-f7c9-4e3b-9360-14fbe796b5cf", 00:19:49.527 "optimal_io_boundary": 0, 00:19:49.527 "md_size": 0, 00:19:49.527 "dif_type": 0, 00:19:49.527 "dif_is_head_of_md": false, 00:19:49.527 "dif_pi_format": 0 00:19:49.527 } 00:19:49.527 }, 00:19:49.527 { 00:19:49.527 "method": "bdev_wait_for_examine" 00:19:49.527 } 00:19:49.527 ] 00:19:49.527 }, 00:19:49.527 { 00:19:49.527 "subsystem": "nbd", 00:19:49.527 "config": [] 00:19:49.527 }, 00:19:49.527 { 00:19:49.527 "subsystem": "scheduler", 00:19:49.527 "config": [ 00:19:49.527 { 00:19:49.527 "method": "framework_set_scheduler", 00:19:49.527 "params": { 00:19:49.527 "name": "static" 00:19:49.527 } 00:19:49.527 } 00:19:49.527 ] 00:19:49.527 }, 00:19:49.527 { 00:19:49.527 "subsystem": "nvmf", 00:19:49.527 "config": [ 00:19:49.527 { 00:19:49.527 "method": "nvmf_set_config", 00:19:49.527 "params": { 00:19:49.527 "discovery_filter": "match_any", 00:19:49.527 "admin_cmd_passthru": { 00:19:49.527 "identify_ctrlr": false 00:19:49.527 }, 00:19:49.527 "dhchap_digests": [ 00:19:49.527 "sha256", 00:19:49.527 "sha384", 00:19:49.527 "sha512" 00:19:49.527 ], 00:19:49.527 "dhchap_dhgroups": [ 00:19:49.527 "null", 00:19:49.527 "ffdhe2048", 00:19:49.527 "ffdhe3072", 00:19:49.527 "ffdhe4096", 00:19:49.527 "ffdhe6144", 00:19:49.527 "ffdhe8192" 00:19:49.527 ] 00:19:49.527 } 00:19:49.527 }, 00:19:49.527 { 00:19:49.527 "method": "nvmf_set_max_subsystems", 00:19:49.527 "params": { 00:19:49.527 "max_subsystems": 1024 00:19:49.527 } 00:19:49.527 }, 00:19:49.527 { 00:19:49.527 "method": "nvmf_set_crdt", 00:19:49.527 "params": { 00:19:49.527 "crdt1": 0, 00:19:49.527 "crdt2": 0, 00:19:49.527 "crdt3": 0 00:19:49.527 } 00:19:49.527 }, 00:19:49.527 { 00:19:49.527 "method": "nvmf_create_transport", 00:19:49.527 "params": { 00:19:49.527 "trtype": "TCP", 00:19:49.527 "max_queue_depth": 128, 00:19:49.527 "max_io_qpairs_per_ctrlr": 127, 00:19:49.527 "in_capsule_data_size": 4096, 00:19:49.527 "max_io_size": 131072, 00:19:49.527 "io_unit_size": 131072, 00:19:49.527 "max_aq_depth": 128, 00:19:49.527 "num_shared_buffers": 511, 00:19:49.527 "buf_cache_size": 4294967295, 00:19:49.527 "dif_insert_or_strip": false, 00:19:49.527 "zcopy": false, 00:19:49.527 "c2h_success": false, 00:19:49.527 "sock_priority": 0, 00:19:49.527 "abort_timeout_sec": 1, 00:19:49.527 "ack_timeout": 0, 00:19:49.527 "data_wr_pool_size": 0 00:19:49.527 } 00:19:49.527 }, 00:19:49.527 { 00:19:49.527 "method": "nvmf_create_subsystem", 00:19:49.527 "params": { 00:19:49.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.527 "allow_any_host": false, 00:19:49.527 "serial_number": "SPDK00000000000001", 00:19:49.527 "model_number": "SPDK bdev Controller", 00:19:49.527 "max_namespaces": 10, 00:19:49.527 "min_cntlid": 1, 00:19:49.527 
"max_cntlid": 65519, 00:19:49.527 "ana_reporting": false 00:19:49.527 } 00:19:49.527 }, 00:19:49.527 { 00:19:49.527 "method": "nvmf_subsystem_add_host", 00:19:49.527 "params": { 00:19:49.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.527 "host": "nqn.2016-06.io.spdk:host1", 00:19:49.527 "psk": "key0" 00:19:49.527 } 00:19:49.527 }, 00:19:49.527 { 00:19:49.527 "method": "nvmf_subsystem_add_ns", 00:19:49.527 "params": { 00:19:49.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.527 "namespace": { 00:19:49.527 "nsid": 1, 00:19:49.527 "bdev_name": "malloc0", 00:19:49.527 "nguid": "05574CCEF7C94E3B936014FBE796B5CF", 00:19:49.527 "uuid": "05574cce-f7c9-4e3b-9360-14fbe796b5cf", 00:19:49.527 "no_auto_visible": false 00:19:49.527 } 00:19:49.527 } 00:19:49.527 }, 00:19:49.527 { 00:19:49.527 "method": "nvmf_subsystem_add_listener", 00:19:49.527 "params": { 00:19:49.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.527 "listen_address": { 00:19:49.527 "trtype": "TCP", 00:19:49.527 "adrfam": "IPv4", 00:19:49.527 "traddr": "10.0.0.2", 00:19:49.527 "trsvcid": "4420" 00:19:49.527 }, 00:19:49.527 "secure_channel": true 00:19:49.527 } 00:19:49.527 } 00:19:49.527 ] 00:19:49.527 } 00:19:49.527 ] 00:19:49.527 }' 00:19:49.527 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:49.786 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:49.786 "subsystems": [ 00:19:49.786 { 00:19:49.786 "subsystem": "keyring", 00:19:49.786 "config": [ 00:19:49.786 { 00:19:49.786 "method": "keyring_file_add_key", 00:19:49.786 "params": { 00:19:49.786 "name": "key0", 00:19:49.786 "path": "/tmp/tmp.ASJZDZz3B0" 00:19:49.786 } 00:19:49.786 } 00:19:49.786 ] 00:19:49.786 }, 00:19:49.786 { 00:19:49.786 "subsystem": "iobuf", 00:19:49.786 "config": [ 00:19:49.786 { 00:19:49.786 "method": "iobuf_set_options", 00:19:49.786 "params": { 00:19:49.786 "small_pool_count": 8192, 00:19:49.786 "large_pool_count": 1024, 00:19:49.786 "small_bufsize": 8192, 00:19:49.786 "large_bufsize": 135168, 00:19:49.786 "enable_numa": false 00:19:49.786 } 00:19:49.786 } 00:19:49.786 ] 00:19:49.786 }, 00:19:49.786 { 00:19:49.786 "subsystem": "sock", 00:19:49.786 "config": [ 00:19:49.786 { 00:19:49.786 "method": "sock_set_default_impl", 00:19:49.786 "params": { 00:19:49.786 "impl_name": "posix" 00:19:49.786 } 00:19:49.786 }, 00:19:49.786 { 00:19:49.786 "method": "sock_impl_set_options", 00:19:49.786 "params": { 00:19:49.786 "impl_name": "ssl", 00:19:49.786 "recv_buf_size": 4096, 00:19:49.786 "send_buf_size": 4096, 00:19:49.786 "enable_recv_pipe": true, 00:19:49.786 "enable_quickack": false, 00:19:49.786 "enable_placement_id": 0, 00:19:49.786 "enable_zerocopy_send_server": true, 00:19:49.786 "enable_zerocopy_send_client": false, 00:19:49.786 "zerocopy_threshold": 0, 00:19:49.786 "tls_version": 0, 00:19:49.786 "enable_ktls": false 00:19:49.786 } 00:19:49.786 }, 00:19:49.786 { 00:19:49.786 "method": "sock_impl_set_options", 00:19:49.786 "params": { 00:19:49.786 "impl_name": "posix", 00:19:49.786 "recv_buf_size": 2097152, 00:19:49.786 "send_buf_size": 2097152, 00:19:49.786 "enable_recv_pipe": true, 00:19:49.786 "enable_quickack": false, 00:19:49.786 "enable_placement_id": 0, 00:19:49.786 "enable_zerocopy_send_server": true, 00:19:49.786 "enable_zerocopy_send_client": false, 00:19:49.786 "zerocopy_threshold": 0, 00:19:49.786 "tls_version": 0, 00:19:49.786 "enable_ktls": false 00:19:49.786 } 00:19:49.786 
} 00:19:49.786 ] 00:19:49.786 }, 00:19:49.786 { 00:19:49.787 "subsystem": "vmd", 00:19:49.787 "config": [] 00:19:49.787 }, 00:19:49.787 { 00:19:49.787 "subsystem": "accel", 00:19:49.787 "config": [ 00:19:49.787 { 00:19:49.787 "method": "accel_set_options", 00:19:49.787 "params": { 00:19:49.787 "small_cache_size": 128, 00:19:49.787 "large_cache_size": 16, 00:19:49.787 "task_count": 2048, 00:19:49.787 "sequence_count": 2048, 00:19:49.787 "buf_count": 2048 00:19:49.787 } 00:19:49.787 } 00:19:49.787 ] 00:19:49.787 }, 00:19:49.787 { 00:19:49.787 "subsystem": "bdev", 00:19:49.787 "config": [ 00:19:49.787 { 00:19:49.787 "method": "bdev_set_options", 00:19:49.787 "params": { 00:19:49.787 "bdev_io_pool_size": 65535, 00:19:49.787 "bdev_io_cache_size": 256, 00:19:49.787 "bdev_auto_examine": true, 00:19:49.787 "iobuf_small_cache_size": 128, 00:19:49.787 "iobuf_large_cache_size": 16 00:19:49.787 } 00:19:49.787 }, 00:19:49.787 { 00:19:49.787 "method": "bdev_raid_set_options", 00:19:49.787 "params": { 00:19:49.787 "process_window_size_kb": 1024, 00:19:49.787 "process_max_bandwidth_mb_sec": 0 00:19:49.787 } 00:19:49.787 }, 00:19:49.787 { 00:19:49.787 "method": "bdev_iscsi_set_options", 00:19:49.787 "params": { 00:19:49.787 "timeout_sec": 30 00:19:49.787 } 00:19:49.787 }, 00:19:49.787 { 00:19:49.787 "method": "bdev_nvme_set_options", 00:19:49.787 "params": { 00:19:49.787 "action_on_timeout": "none", 00:19:49.787 "timeout_us": 0, 00:19:49.787 "timeout_admin_us": 0, 00:19:49.787 "keep_alive_timeout_ms": 10000, 00:19:49.787 "arbitration_burst": 0, 00:19:49.787 "low_priority_weight": 0, 00:19:49.787 "medium_priority_weight": 0, 00:19:49.787 "high_priority_weight": 0, 00:19:49.787 "nvme_adminq_poll_period_us": 10000, 00:19:49.787 "nvme_ioq_poll_period_us": 0, 00:19:49.787 "io_queue_requests": 512, 00:19:49.787 "delay_cmd_submit": true, 00:19:49.787 "transport_retry_count": 4, 00:19:49.787 "bdev_retry_count": 3, 00:19:49.787 "transport_ack_timeout": 0, 00:19:49.787 "ctrlr_loss_timeout_sec": 0, 00:19:49.787 "reconnect_delay_sec": 0, 00:19:49.787 "fast_io_fail_timeout_sec": 0, 00:19:49.787 "disable_auto_failback": false, 00:19:49.787 "generate_uuids": false, 00:19:49.787 "transport_tos": 0, 00:19:49.787 "nvme_error_stat": false, 00:19:49.787 "rdma_srq_size": 0, 00:19:49.787 "io_path_stat": false, 00:19:49.787 "allow_accel_sequence": false, 00:19:49.787 "rdma_max_cq_size": 0, 00:19:49.787 "rdma_cm_event_timeout_ms": 0, 00:19:49.787 "dhchap_digests": [ 00:19:49.787 "sha256", 00:19:49.787 "sha384", 00:19:49.787 "sha512" 00:19:49.787 ], 00:19:49.787 "dhchap_dhgroups": [ 00:19:49.787 "null", 00:19:49.787 "ffdhe2048", 00:19:49.787 "ffdhe3072", 00:19:49.787 "ffdhe4096", 00:19:49.787 "ffdhe6144", 00:19:49.787 "ffdhe8192" 00:19:49.787 ] 00:19:49.787 } 00:19:49.787 }, 00:19:49.787 { 00:19:49.787 "method": "bdev_nvme_attach_controller", 00:19:49.787 "params": { 00:19:49.787 "name": "TLSTEST", 00:19:49.787 "trtype": "TCP", 00:19:49.787 "adrfam": "IPv4", 00:19:49.787 "traddr": "10.0.0.2", 00:19:49.787 "trsvcid": "4420", 00:19:49.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.787 "prchk_reftag": false, 00:19:49.787 "prchk_guard": false, 00:19:49.787 "ctrlr_loss_timeout_sec": 0, 00:19:49.787 "reconnect_delay_sec": 0, 00:19:49.787 "fast_io_fail_timeout_sec": 0, 00:19:49.787 "psk": "key0", 00:19:49.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:49.787 "hdgst": false, 00:19:49.787 "ddgst": false, 00:19:49.787 "multipath": "multipath" 00:19:49.787 } 00:19:49.787 }, 00:19:49.787 { 00:19:49.787 "method": 
"bdev_nvme_set_hotplug", 00:19:49.787 "params": { 00:19:49.787 "period_us": 100000, 00:19:49.787 "enable": false 00:19:49.787 } 00:19:49.787 }, 00:19:49.787 { 00:19:49.787 "method": "bdev_wait_for_examine" 00:19:49.787 } 00:19:49.787 ] 00:19:49.787 }, 00:19:49.787 { 00:19:49.787 "subsystem": "nbd", 00:19:49.787 "config": [] 00:19:49.787 } 00:19:49.787 ] 00:19:49.787 }' 00:19:49.787 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3765432 00:19:49.787 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3765432 ']' 00:19:49.787 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3765432 00:19:49.787 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:49.787 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.787 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3765432 00:19:49.787 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:49.787 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:49.787 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3765432' 00:19:49.787 killing process with pid 3765432 00:19:49.787 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3765432 00:19:49.787 Received shutdown signal, test time was about 10.000000 seconds 00:19:49.787 00:19:49.787 Latency(us) 00:19:49.787 [2024-11-26T18:21:12.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.787 [2024-11-26T18:21:12.901Z] =================================================================================================================== 00:19:49.787 [2024-11-26T18:21:12.901Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:49.787 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3765432 00:19:49.787 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3764965 00:19:49.787 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3764965 ']' 00:19:49.787 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3764965 00:19:49.787 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:49.787 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.787 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3764965 00:19:50.047 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:50.047 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:50.047 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3764965' 00:19:50.047 killing process with pid 3764965 00:19:50.047 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3764965 00:19:50.047 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3764965 00:19:50.047 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:50.047 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:50.047 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:50.047 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:50.047 "subsystems": [ 00:19:50.047 { 00:19:50.047 "subsystem": "keyring", 00:19:50.047 "config": [ 00:19:50.047 { 00:19:50.047 "method": "keyring_file_add_key", 00:19:50.047 "params": { 00:19:50.047 "name": "key0", 00:19:50.047 "path": "/tmp/tmp.ASJZDZz3B0" 00:19:50.047 } 00:19:50.047 } 00:19:50.047 ] 00:19:50.047 }, 00:19:50.047 { 00:19:50.047 "subsystem": "iobuf", 00:19:50.047 "config": [ 00:19:50.047 { 00:19:50.047 "method": "iobuf_set_options", 00:19:50.047 "params": { 00:19:50.047 "small_pool_count": 8192, 00:19:50.047 "large_pool_count": 1024, 00:19:50.047 "small_bufsize": 8192, 00:19:50.047 "large_bufsize": 135168, 00:19:50.047 "enable_numa": false 00:19:50.047 } 00:19:50.047 } 00:19:50.047 ] 00:19:50.047 }, 00:19:50.047 { 00:19:50.047 "subsystem": "sock", 00:19:50.047 "config": [ 00:19:50.047 { 00:19:50.047 "method": "sock_set_default_impl", 00:19:50.047 "params": { 00:19:50.047 "impl_name": "posix" 00:19:50.047 } 00:19:50.047 }, 00:19:50.047 { 00:19:50.047 "method": "sock_impl_set_options", 00:19:50.047 "params": { 00:19:50.047 "impl_name": "ssl", 00:19:50.047 "recv_buf_size": 4096, 00:19:50.047 "send_buf_size": 4096, 00:19:50.047 "enable_recv_pipe": true, 00:19:50.047 "enable_quickack": false, 00:19:50.047 "enable_placement_id": 0, 00:19:50.047 "enable_zerocopy_send_server": true, 00:19:50.047 "enable_zerocopy_send_client": false, 00:19:50.047 "zerocopy_threshold": 0, 00:19:50.047 "tls_version": 0, 00:19:50.047 "enable_ktls": false 00:19:50.047 } 00:19:50.047 }, 00:19:50.047 { 00:19:50.047 "method": "sock_impl_set_options", 00:19:50.047 "params": { 00:19:50.047 "impl_name": "posix", 00:19:50.047 "recv_buf_size": 2097152, 00:19:50.047 "send_buf_size": 2097152, 00:19:50.047 "enable_recv_pipe": true, 00:19:50.047 "enable_quickack": false, 00:19:50.047 "enable_placement_id": 0, 00:19:50.047 "enable_zerocopy_send_server": true, 00:19:50.047 "enable_zerocopy_send_client": false, 00:19:50.047 "zerocopy_threshold": 0, 00:19:50.047 "tls_version": 0, 00:19:50.047 "enable_ktls": false 00:19:50.047 } 00:19:50.047 } 00:19:50.047 ] 00:19:50.047 }, 00:19:50.047 { 00:19:50.047 "subsystem": "vmd", 00:19:50.047 "config": [] 00:19:50.047 }, 00:19:50.047 { 00:19:50.047 "subsystem": "accel", 00:19:50.047 "config": [ 00:19:50.047 { 00:19:50.047 "method": "accel_set_options", 00:19:50.047 "params": { 00:19:50.047 "small_cache_size": 128, 00:19:50.047 "large_cache_size": 16, 00:19:50.047 "task_count": 2048, 00:19:50.047 "sequence_count": 2048, 00:19:50.047 "buf_count": 2048 00:19:50.047 } 00:19:50.047 } 00:19:50.047 ] 00:19:50.047 }, 00:19:50.047 { 00:19:50.047 "subsystem": "bdev", 00:19:50.047 "config": [ 00:19:50.047 { 00:19:50.047 "method": "bdev_set_options", 00:19:50.047 "params": { 00:19:50.047 "bdev_io_pool_size": 65535, 00:19:50.047 "bdev_io_cache_size": 256, 00:19:50.047 "bdev_auto_examine": true, 00:19:50.047 "iobuf_small_cache_size": 128, 00:19:50.047 "iobuf_large_cache_size": 16 00:19:50.047 } 00:19:50.047 }, 00:19:50.047 { 00:19:50.047 "method": "bdev_raid_set_options", 00:19:50.047 "params": { 00:19:50.047 "process_window_size_kb": 1024, 00:19:50.047 "process_max_bandwidth_mb_sec": 0 00:19:50.047 } 00:19:50.047 }, 
00:19:50.047 { 00:19:50.047 "method": "bdev_iscsi_set_options", 00:19:50.047 "params": { 00:19:50.047 "timeout_sec": 30 00:19:50.047 } 00:19:50.047 }, 00:19:50.047 { 00:19:50.047 "method": "bdev_nvme_set_options", 00:19:50.047 "params": { 00:19:50.047 "action_on_timeout": "none", 00:19:50.047 "timeout_us": 0, 00:19:50.047 "timeout_admin_us": 0, 00:19:50.047 "keep_alive_timeout_ms": 10000, 00:19:50.047 "arbitration_burst": 0, 00:19:50.047 "low_priority_weight": 0, 00:19:50.047 "medium_priority_weight": 0, 00:19:50.047 "high_priority_weight": 0, 00:19:50.047 "nvme_adminq_poll_period_us": 10000, 00:19:50.047 "nvme_ioq_poll_period_us": 0, 00:19:50.047 "io_queue_requests": 0, 00:19:50.047 "delay_cmd_submit": true, 00:19:50.047 "transport_retry_count": 4, 00:19:50.047 "bdev_retry_count": 3, 00:19:50.047 "transport_ack_timeout": 0, 00:19:50.047 "ctrlr_loss_timeout_sec": 0, 00:19:50.047 "reconnect_delay_sec": 0, 00:19:50.047 "fast_io_fail_timeout_sec": 0, 00:19:50.047 "disable_auto_failback": false, 00:19:50.047 "generate_uuids": false, 00:19:50.047 "transport_tos": 0, 00:19:50.047 "nvme_error_stat": false, 00:19:50.047 "rdma_srq_size": 0, 00:19:50.047 "io_path_stat": false, 00:19:50.047 "allow_accel_sequence": false, 00:19:50.047 "rdma_max_cq_size": 0, 00:19:50.047 "rdma_cm_event_timeout_ms": 0, 00:19:50.047 "dhchap_digests": [ 00:19:50.047 "sha256", 00:19:50.047 "sha384", 00:19:50.047 "sha512" 00:19:50.047 ], 00:19:50.047 "dhchap_dhgroups": [ 00:19:50.047 "null", 00:19:50.047 "ffdhe2048", 00:19:50.047 "ffdhe3072", 00:19:50.047 "ffdhe4096", 00:19:50.047 "ffdhe6144", 00:19:50.047 "ffdhe8192" 00:19:50.047 ] 00:19:50.047 } 00:19:50.047 }, 00:19:50.047 { 00:19:50.047 "method": "bdev_nvme_set_hotplug", 00:19:50.047 "params": { 00:19:50.047 "period_us": 100000, 00:19:50.047 "enable": false 00:19:50.047 } 00:19:50.047 }, 00:19:50.047 { 00:19:50.047 "method": "bdev_malloc_create", 00:19:50.047 "params": { 00:19:50.047 "name": "malloc0", 00:19:50.047 "num_blocks": 8192, 00:19:50.047 "block_size": 4096, 00:19:50.047 "physical_block_size": 4096, 00:19:50.047 "uuid": "05574cce-f7c9-4e3b-9360-14fbe796b5cf", 00:19:50.047 "optimal_io_boundary": 0, 00:19:50.047 "md_size": 0, 00:19:50.047 "dif_type": 0, 00:19:50.047 "dif_is_head_of_md": false, 00:19:50.047 "dif_pi_format": 0 00:19:50.047 } 00:19:50.047 }, 00:19:50.047 { 00:19:50.047 "method": "bdev_wait_for_examine" 00:19:50.047 } 00:19:50.047 ] 00:19:50.047 }, 00:19:50.047 { 00:19:50.047 "subsystem": "nbd", 00:19:50.047 "config": [] 00:19:50.047 }, 00:19:50.047 { 00:19:50.047 "subsystem": "scheduler", 00:19:50.047 "config": [ 00:19:50.047 { 00:19:50.047 "method": "framework_set_scheduler", 00:19:50.047 "params": { 00:19:50.047 "name": "static" 00:19:50.047 } 00:19:50.047 } 00:19:50.047 ] 00:19:50.047 }, 00:19:50.047 { 00:19:50.047 "subsystem": "nvmf", 00:19:50.047 "config": [ 00:19:50.047 { 00:19:50.047 "method": "nvmf_set_config", 00:19:50.047 "params": { 00:19:50.047 "discovery_filter": "match_any", 00:19:50.047 "admin_cmd_passthru": { 00:19:50.047 "identify_ctrlr": false 00:19:50.047 }, 00:19:50.047 "dhchap_digests": [ 00:19:50.047 "sha256", 00:19:50.047 "sha384", 00:19:50.047 "sha512" 00:19:50.047 ], 00:19:50.047 "dhchap_dhgroups": [ 00:19:50.047 "null", 00:19:50.047 "ffdhe2048", 00:19:50.047 "ffdhe3072", 00:19:50.047 "ffdhe4096", 00:19:50.048 "ffdhe6144", 00:19:50.048 "ffdhe8192" 00:19:50.048 ] 00:19:50.048 } 00:19:50.048 }, 00:19:50.048 { 00:19:50.048 "method": "nvmf_set_max_subsystems", 00:19:50.048 "params": { 00:19:50.048 "max_subsystems": 1024 
00:19:50.048 } 00:19:50.048 }, 00:19:50.048 { 00:19:50.048 "method": "nvmf_set_crdt", 00:19:50.048 "params": { 00:19:50.048 "crdt1": 0, 00:19:50.048 "crdt2": 0, 00:19:50.048 "crdt3": 0 00:19:50.048 } 00:19:50.048 }, 00:19:50.048 { 00:19:50.048 "method": "nvmf_create_transport", 00:19:50.048 "params": { 00:19:50.048 "trtype": "TCP", 00:19:50.048 "max_queue_depth": 128, 00:19:50.048 "max_io_qpairs_per_ctrlr": 127, 00:19:50.048 "in_capsule_data_size": 4096, 00:19:50.048 "max_io_size": 131072, 00:19:50.048 "io_unit_size": 131072, 00:19:50.048 "max_aq_depth": 128, 00:19:50.048 "num_shared_buffers": 511, 00:19:50.048 "buf_cache_size": 4294967295, 00:19:50.048 "dif_insert_or_strip": false, 00:19:50.048 "zcopy": false, 00:19:50.048 "c2h_success": false, 00:19:50.048 "sock_priority": 0, 00:19:50.048 "abort_timeout_sec": 1, 00:19:50.048 "ack_timeout": 0, 00:19:50.048 "data_wr_pool_size": 0 00:19:50.048 } 00:19:50.048 }, 00:19:50.048 { 00:19:50.048 "method": "nvmf_create_subsystem", 00:19:50.048 "params": { 00:19:50.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.048 "allow_any_host": false, 00:19:50.048 "serial_number": "SPDK00000000000001", 00:19:50.048 "model_number": "SPDK bdev Controller", 00:19:50.048 "max_namespaces": 10, 00:19:50.048 "min_cntlid": 1, 00:19:50.048 "max_cntlid": 65519, 00:19:50.048 "ana_reporting": false 00:19:50.048 } 00:19:50.048 }, 00:19:50.048 { 00:19:50.048 "method": "nvmf_subsystem_add_host", 00:19:50.048 "params": { 00:19:50.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.048 "host": "nqn.2016-06.io.spdk:host1", 00:19:50.048 "psk": "key0" 00:19:50.048 } 00:19:50.048 }, 00:19:50.048 { 00:19:50.048 "method": "nvmf_subsystem_add_ns", 00:19:50.048 "params": { 00:19:50.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.048 "namespace": { 00:19:50.048 "nsid": 1, 00:19:50.048 "bdev_name": "malloc0", 00:19:50.048 "nguid": "05574CCEF7C94E3B936014FBE796B5CF", 00:19:50.048 "uuid": "05574cce-f7c9-4e3b-9360-14fbe796b5cf", 00:19:50.048 "no_auto_visible": false 00:19:50.048 } 00:19:50.048 } 00:19:50.048 }, 00:19:50.048 { 00:19:50.048 "method": "nvmf_subsystem_add_listener", 00:19:50.048 "params": { 00:19:50.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.048 "listen_address": { 00:19:50.048 "trtype": "TCP", 00:19:50.048 "adrfam": "IPv4", 00:19:50.048 "traddr": "10.0.0.2", 00:19:50.048 "trsvcid": "4420" 00:19:50.048 }, 00:19:50.048 "secure_channel": true 00:19:50.048 } 00:19:50.048 } 00:19:50.048 ] 00:19:50.048 } 00:19:50.048 ] 00:19:50.048 }' 00:19:50.048 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.048 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3765820 00:19:50.048 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:50.048 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3765820 00:19:50.048 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3765820 ']' 00:19:50.048 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.048 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.048 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:50.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.048 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.048 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.048 [2024-11-26 19:21:13.154066] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:19:50.048 [2024-11-26 19:21:13.154112] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.307 [2024-11-26 19:21:13.226305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.307 [2024-11-26 19:21:13.266457] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.307 [2024-11-26 19:21:13.266496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.307 [2024-11-26 19:21:13.266503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.307 [2024-11-26 19:21:13.266509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.307 [2024-11-26 19:21:13.266514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:50.307 [2024-11-26 19:21:13.267104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.566 [2024-11-26 19:21:13.480187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.566 [2024-11-26 19:21:13.512212] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:50.566 [2024-11-26 19:21:13.512415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.133 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.133 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:51.133 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:51.133 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:51.133 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.133 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.133 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3765855 00:19:51.133 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3765855 /var/tmp/bdevperf.sock 00:19:51.133 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3765855 ']' 00:19:51.133 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.133 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:51.133 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.133 19:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.133 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:51.133 "subsystems": [ 00:19:51.133 { 00:19:51.133 "subsystem": "keyring", 00:19:51.133 "config": [ 00:19:51.133 { 00:19:51.133 "method": "keyring_file_add_key", 00:19:51.133 "params": { 00:19:51.133 "name": "key0", 00:19:51.133 "path": "/tmp/tmp.ASJZDZz3B0" 00:19:51.133 } 00:19:51.133 } 00:19:51.133 ] 00:19:51.133 }, 00:19:51.133 { 00:19:51.133 "subsystem": "iobuf", 00:19:51.133 "config": [ 00:19:51.133 { 00:19:51.133 "method": "iobuf_set_options", 00:19:51.133 "params": { 00:19:51.133 "small_pool_count": 8192, 00:19:51.133 "large_pool_count": 1024, 00:19:51.133 "small_bufsize": 8192, 00:19:51.133 "large_bufsize": 135168, 00:19:51.133 "enable_numa": false 00:19:51.133 } 00:19:51.133 } 00:19:51.133 ] 00:19:51.133 }, 00:19:51.133 { 00:19:51.133 "subsystem": "sock", 00:19:51.133 "config": [ 00:19:51.133 { 00:19:51.133 "method": "sock_set_default_impl", 00:19:51.133 "params": { 00:19:51.133 "impl_name": "posix" 00:19:51.133 } 00:19:51.133 }, 00:19:51.133 { 00:19:51.133 "method": "sock_impl_set_options", 00:19:51.133 "params": { 00:19:51.133 "impl_name": "ssl", 00:19:51.134 "recv_buf_size": 4096, 00:19:51.134 "send_buf_size": 4096, 00:19:51.134 "enable_recv_pipe": true, 00:19:51.134 "enable_quickack": false, 00:19:51.134 "enable_placement_id": 0, 00:19:51.134 "enable_zerocopy_send_server": true, 00:19:51.134 "enable_zerocopy_send_client": false, 00:19:51.134 "zerocopy_threshold": 0, 00:19:51.134 "tls_version": 0, 00:19:51.134 "enable_ktls": false 00:19:51.134 } 00:19:51.134 }, 00:19:51.134 { 00:19:51.134 "method": "sock_impl_set_options", 00:19:51.134 "params": { 00:19:51.134 "impl_name": "posix", 00:19:51.134 "recv_buf_size": 2097152, 00:19:51.134 "send_buf_size": 2097152, 00:19:51.134 "enable_recv_pipe": true, 00:19:51.134 "enable_quickack": false, 00:19:51.134 "enable_placement_id": 0, 00:19:51.134 "enable_zerocopy_send_server": true, 00:19:51.134 "enable_zerocopy_send_client": false, 00:19:51.134 "zerocopy_threshold": 0, 00:19:51.134 "tls_version": 0, 00:19:51.134 "enable_ktls": false 00:19:51.134 } 00:19:51.134 } 00:19:51.134 ] 00:19:51.134 }, 00:19:51.134 { 00:19:51.134 "subsystem": "vmd", 00:19:51.134 "config": [] 00:19:51.134 }, 00:19:51.134 { 00:19:51.134 "subsystem": "accel", 00:19:51.134 "config": [ 00:19:51.134 { 00:19:51.134 "method": "accel_set_options", 00:19:51.134 "params": { 00:19:51.134 "small_cache_size": 128, 00:19:51.134 "large_cache_size": 16, 00:19:51.134 "task_count": 2048, 00:19:51.134 "sequence_count": 2048, 00:19:51.134 "buf_count": 2048 00:19:51.134 } 00:19:51.134 } 00:19:51.134 ] 00:19:51.134 }, 00:19:51.134 { 00:19:51.134 "subsystem": "bdev", 00:19:51.134 "config": [ 00:19:51.134 { 00:19:51.134 "method": "bdev_set_options", 00:19:51.134 "params": { 00:19:51.134 "bdev_io_pool_size": 65535, 00:19:51.134 "bdev_io_cache_size": 256, 00:19:51.134 "bdev_auto_examine": true, 00:19:51.134 "iobuf_small_cache_size": 128, 00:19:51.134 "iobuf_large_cache_size": 16 00:19:51.134 } 00:19:51.134 }, 00:19:51.134 { 00:19:51.134 "method": "bdev_raid_set_options", 00:19:51.134 "params": { 00:19:51.134 "process_window_size_kb": 1024, 00:19:51.134 "process_max_bandwidth_mb_sec": 0 00:19:51.134 } 00:19:51.134 }, 
00:19:51.134 { 00:19:51.134 "method": "bdev_iscsi_set_options", 00:19:51.134 "params": { 00:19:51.134 "timeout_sec": 30 00:19:51.134 } 00:19:51.134 }, 00:19:51.134 { 00:19:51.134 "method": "bdev_nvme_set_options", 00:19:51.134 "params": { 00:19:51.134 "action_on_timeout": "none", 00:19:51.134 "timeout_us": 0, 00:19:51.134 "timeout_admin_us": 0, 00:19:51.134 "keep_alive_timeout_ms": 10000, 00:19:51.134 "arbitration_burst": 0, 00:19:51.134 "low_priority_weight": 0, 00:19:51.134 "medium_priority_weight": 0, 00:19:51.134 "high_priority_weight": 0, 00:19:51.134 "nvme_adminq_poll_period_us": 10000, 00:19:51.134 "nvme_ioq_poll_period_us": 0, 00:19:51.134 "io_queue_requests": 512, 00:19:51.134 "delay_cmd_submit": true, 00:19:51.134 "transport_retry_count": 4, 00:19:51.134 "bdev_retry_count": 3, 00:19:51.134 "transport_ack_timeout": 0, 00:19:51.134 "ctrlr_loss_timeout_sec": 0, 00:19:51.134 "reconnect_delay_sec": 0, 00:19:51.134 "fast_io_fail_timeout_sec": 0, 00:19:51.134 "disable_auto_failback": false, 00:19:51.134 "generate_uuids": false, 00:19:51.134 "transport_tos": 0, 00:19:51.134 "nvme_error_stat": false, 00:19:51.134 "rdma_srq_size": 0, 00:19:51.134 "io_path_stat": false, 00:19:51.134 "allow_accel_sequence": false, 00:19:51.134 "rdma_max_cq_size": 0, 00:19:51.134 "rdma_cm_event_timeout_ms": 0, 00:19:51.134 "dhchap_digests": [ 00:19:51.134 "sha256", 00:19:51.134 "sha384", 00:19:51.134 "sha512" 00:19:51.134 ], 00:19:51.134 "dhchap_dhgroups": [ 00:19:51.134 "null", 00:19:51.134 "ffdhe2048", 00:19:51.134 "ffdhe3072", 00:19:51.134 "ffdhe4096", 00:19:51.134 "ffdhe6144", 00:19:51.134 "ffdhe8192" 00:19:51.134 ] 00:19:51.134 } 00:19:51.134 }, 00:19:51.134 { 00:19:51.134 "method": "bdev_nvme_attach_controller", 00:19:51.134 "params": { 00:19:51.134 "name": "TLSTEST", 00:19:51.134 "trtype": "TCP", 00:19:51.134 "adrfam": "IPv4", 00:19:51.134 "traddr": "10.0.0.2", 00:19:51.134 "trsvcid": "4420", 00:19:51.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.134 "prchk_reftag": false, 00:19:51.134 "prchk_guard": false, 00:19:51.134 "ctrlr_loss_timeout_sec": 0, 00:19:51.134 "reconnect_delay_sec": 0, 00:19:51.134 "fast_io_fail_timeout_sec": 0, 00:19:51.134 "psk": "key0", 00:19:51.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.134 "hdgst": false, 00:19:51.134 "ddgst": false, 00:19:51.134 "multipath": "multipath" 00:19:51.134 } 00:19:51.134 }, 00:19:51.134 { 00:19:51.134 "method": "bdev_nvme_set_hotplug", 00:19:51.134 "params": { 00:19:51.134 "period_us": 100000, 00:19:51.134 "enable": false 00:19:51.134 } 00:19:51.134 }, 00:19:51.134 { 00:19:51.134 "method": "bdev_wait_for_examine" 00:19:51.134 } 00:19:51.134 ] 00:19:51.134 }, 00:19:51.134 { 00:19:51.134 "subsystem": "nbd", 00:19:51.134 "config": [] 00:19:51.134 } 00:19:51.134 ] 00:19:51.134 }' 00:19:51.134 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.134 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.134 [2024-11-26 19:21:14.068905] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:19:51.134 [2024-11-26 19:21:14.068953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3765855 ] 00:19:51.134 [2024-11-26 19:21:14.141445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.134 [2024-11-26 19:21:14.180782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.392 [2024-11-26 19:21:14.332633] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.957 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.957 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:51.957 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:51.957 Running I/O for 10 seconds... 00:19:53.895 5418.00 IOPS, 21.16 MiB/s [2024-11-26T18:21:18.384Z] 5449.00 IOPS, 21.29 MiB/s [2024-11-26T18:21:19.318Z] 5510.33 IOPS, 21.52 MiB/s [2024-11-26T18:21:20.253Z] 5492.25 IOPS, 21.45 MiB/s [2024-11-26T18:21:21.187Z] 5483.80 IOPS, 21.42 MiB/s [2024-11-26T18:21:22.121Z] 5503.50 IOPS, 21.50 MiB/s [2024-11-26T18:21:23.057Z] 5498.71 IOPS, 21.48 MiB/s [2024-11-26T18:21:24.432Z] 5493.38 IOPS, 21.46 MiB/s [2024-11-26T18:21:25.365Z] 5445.33 IOPS, 21.27 MiB/s [2024-11-26T18:21:25.365Z] 5402.10 IOPS, 21.10 MiB/s 00:20:02.251 Latency(us) 00:20:02.251 [2024-11-26T18:21:25.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.251 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:02.251 Verification LBA range: start 0x0 length 0x2000 00:20:02.251 TLSTESTn1 : 10.02 5405.68 21.12 0.00 0.00 23643.13 6553.60 33454.57 00:20:02.251 [2024-11-26T18:21:25.365Z] =================================================================================================================== 00:20:02.251 [2024-11-26T18:21:25.365Z] Total : 5405.68 21.12 0.00 0.00 23643.13 6553.60 33454.57 00:20:02.251 { 00:20:02.251 "results": [ 00:20:02.251 { 00:20:02.251 "job": "TLSTESTn1", 00:20:02.251 "core_mask": "0x4", 00:20:02.251 "workload": "verify", 00:20:02.251 "status": "finished", 00:20:02.251 "verify_range": { 00:20:02.251 "start": 0, 00:20:02.251 "length": 8192 00:20:02.251 }, 00:20:02.251 "queue_depth": 128, 00:20:02.251 "io_size": 4096, 00:20:02.251 "runtime": 10.016681, 00:20:02.251 "iops": 5405.682780553759, 00:20:02.251 "mibps": 21.11594836153812, 00:20:02.251 "io_failed": 0, 00:20:02.251 "io_timeout": 0, 00:20:02.251 "avg_latency_us": 23643.12924319775, 00:20:02.251 "min_latency_us": 6553.6, 00:20:02.251 "max_latency_us": 33454.56761904762 00:20:02.251 } 00:20:02.251 ], 00:20:02.251 "core_count": 1 00:20:02.251 } 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3765855 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3765855 ']' 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3765855 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3765855 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3765855' 00:20:02.251 killing process with pid 3765855 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3765855 00:20:02.251 Received shutdown signal, test time was about 10.000000 seconds 00:20:02.251 00:20:02.251 Latency(us) 00:20:02.251 [2024-11-26T18:21:25.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.251 [2024-11-26T18:21:25.365Z] =================================================================================================================== 00:20:02.251 [2024-11-26T18:21:25.365Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3765855 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3765820 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3765820 ']' 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3765820 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3765820 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3765820' 00:20:02.251 killing process with pid 3765820 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3765820 00:20:02.251 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3765820 00:20:02.509 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:02.509 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:02.509 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:02.509 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.509 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3767708 00:20:02.509 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3767708 00:20:02.509 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:20:02.509 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3767708 ']' 00:20:02.509 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.509 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.509 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.509 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.509 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.509 [2024-11-26 19:21:25.553927] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:20:02.509 [2024-11-26 19:21:25.553972] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.767 [2024-11-26 19:21:25.629916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.767 [2024-11-26 19:21:25.667795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.767 [2024-11-26 19:21:25.667831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.767 [2024-11-26 19:21:25.667838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.767 [2024-11-26 19:21:25.667844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.767 [2024-11-26 19:21:25.667850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
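With the replacement target (pid 3767708) up, target/tls.sh@221 calls setup_nvmf_tgt again, and the trace that follows repeats the RPC sequence already seen for the first target: a TCP transport, a subsystem backed by a malloc bdev, a TLS-enabled listener, and a host entry bound to the PSK. Condensed into one place as a sketch; NQNs, addresses and the key path are taken verbatim from this log, while $SPDK and $rpc are shorthand introduced here, not part of the test script.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py"
key=/tmp/tmp.ASJZDZz3B0                     # PSK interchange file already in use earlier in this run
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k flag on nvmf_subsystem_add_listener is what requests the experimental TLS-secured listener (the save_config dumps above record it as "secure_channel": true), and the key registered with keyring_file_add_key is referenced by name, key0, in both nvmf_subsystem_add_host and the initiator-side bdev_nvme_attach_controller call. Earlier in this section (target/tls.sh@205-206) the same configuration was instead replayed without re-issuing RPCs, by feeding the save_config JSON back into nvmf_tgt and bdevperf through -c /dev/fd/62 and -c /dev/fd/63.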
00:20:02.767 [2024-11-26 19:21:25.668419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.767 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.767 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:02.767 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:02.767 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:02.767 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.767 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.767 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.ASJZDZz3B0 00:20:02.767 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ASJZDZz3B0 00:20:02.767 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:03.025 [2024-11-26 19:21:25.976897] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.025 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:03.282 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:03.282 [2024-11-26 19:21:26.365894] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.282 [2024-11-26 19:21:26.366104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.282 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:03.540 malloc0 00:20:03.540 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:03.798 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ASJZDZz3B0 00:20:04.065 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:04.323 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3768111 00:20:04.323 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:04.323 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:04.323 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3768111 /var/tmp/bdevperf.sock 00:20:04.323 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3768111 ']' 00:20:04.323 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.323 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.323 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.323 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.323 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.323 [2024-11-26 19:21:27.230316] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:20:04.323 [2024-11-26 19:21:27.230365] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3768111 ] 00:20:04.323 [2024-11-26 19:21:27.304885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.323 [2024-11-26 19:21:27.345055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.581 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.581 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:04.581 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ASJZDZz3B0 00:20:04.581 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:04.839 [2024-11-26 19:21:27.801802] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:04.839 nvme0n1 00:20:04.839 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:05.096 Running I/O for 1 seconds... 
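The initiator side of that run (target/tls.sh@222-234) mirrors the target setup: bdevperf is started in RPC-wait mode (-z), the same PSK file is loaded into its keyring, the controller is attached over TLS, and perform_tests drives the one-second verify workload against nvme0n1. A sketch of those steps with the values from this log; the brpc wrapper is shorthand introduced here, and the test also runs waitforlisten on /var/tmp/bdevperf.sock before issuing RPCs, which is omitted below.

"$SPDK/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
bdevperf_pid=$!
brpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock "$@"; }
brpc keyring_file_add_key key0 /tmp/tmp.ASJZDZz3B0
brpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
     --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests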
00:20:06.029 5303.00 IOPS, 20.71 MiB/s 00:20:06.029 Latency(us) 00:20:06.029 [2024-11-26T18:21:29.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.029 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:06.029 Verification LBA range: start 0x0 length 0x2000 00:20:06.029 nvme0n1 : 1.02 5346.59 20.89 0.00 0.00 23780.96 6179.11 29210.33 00:20:06.029 [2024-11-26T18:21:29.143Z] =================================================================================================================== 00:20:06.029 [2024-11-26T18:21:29.143Z] Total : 5346.59 20.89 0.00 0.00 23780.96 6179.11 29210.33 00:20:06.029 { 00:20:06.029 "results": [ 00:20:06.029 { 00:20:06.029 "job": "nvme0n1", 00:20:06.029 "core_mask": "0x2", 00:20:06.029 "workload": "verify", 00:20:06.029 "status": "finished", 00:20:06.029 "verify_range": { 00:20:06.029 "start": 0, 00:20:06.029 "length": 8192 00:20:06.029 }, 00:20:06.029 "queue_depth": 128, 00:20:06.029 "io_size": 4096, 00:20:06.029 "runtime": 1.015787, 00:20:06.029 "iops": 5346.593331082206, 00:20:06.029 "mibps": 20.885130199539866, 00:20:06.029 "io_failed": 0, 00:20:06.029 "io_timeout": 0, 00:20:06.029 "avg_latency_us": 23780.961339751513, 00:20:06.029 "min_latency_us": 6179.108571428572, 00:20:06.029 "max_latency_us": 29210.33142857143 00:20:06.029 } 00:20:06.029 ], 00:20:06.029 "core_count": 1 00:20:06.029 } 00:20:06.029 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3768111 00:20:06.029 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3768111 ']' 00:20:06.029 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3768111 00:20:06.029 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:06.029 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.029 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3768111 00:20:06.029 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:06.029 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:06.029 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3768111' 00:20:06.029 killing process with pid 3768111 00:20:06.029 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3768111 00:20:06.029 Received shutdown signal, test time was about 1.000000 seconds 00:20:06.029 00:20:06.029 Latency(us) 00:20:06.029 [2024-11-26T18:21:29.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.029 [2024-11-26T18:21:29.143Z] =================================================================================================================== 00:20:06.029 [2024-11-26T18:21:29.143Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.029 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3768111 00:20:06.288 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3767708 00:20:06.288 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3767708 ']' 00:20:06.288 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3767708 00:20:06.288 19:21:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:06.288 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.288 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3767708 00:20:06.288 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:06.288 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:06.288 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3767708' 00:20:06.288 killing process with pid 3767708 00:20:06.288 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3767708 00:20:06.288 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3767708 00:20:06.547 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:06.547 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:06.547 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:06.547 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.547 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3768420 00:20:06.547 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:06.547 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3768420 00:20:06.547 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3768420 ']' 00:20:06.547 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.547 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.547 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.547 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.547 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.547 [2024-11-26 19:21:29.517981] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:20:06.547 [2024-11-26 19:21:29.518029] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.547 [2024-11-26 19:21:29.594855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.547 [2024-11-26 19:21:29.629850] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.547 [2024-11-26 19:21:29.629885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
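The restarted target is launched with -e 0xFFFF, i.e. every tracepoint group enabled, and the notices above spell out how to inspect the resulting trace ring. A small sketch of both options; the binary location is an assumption based on a standard SPDK build layout, only the spdk_trace invocation itself comes from the notice:

# Two ways to look at the trace ring advertised in the notices above (build/bin location assumed).
build/bin/spdk_trace -s nvmf -i 0          # live snapshot of events, the exact invocation the notice suggests
cp /dev/shm/nvmf_trace.0 /tmp/             # or keep the raw shared-memory file for offline analysis/debug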
00:20:06.547 [2024-11-26 19:21:29.629892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.547 [2024-11-26 19:21:29.629898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.547 [2024-11-26 19:21:29.629903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:06.547 [2024-11-26 19:21:29.630454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.805 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.805 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:06.805 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:06.805 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:06.805 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.805 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.805 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:06.805 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.805 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.805 [2024-11-26 19:21:29.774360] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.805 malloc0 00:20:06.805 [2024-11-26 19:21:29.802526] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:06.805 [2024-11-26 19:21:29.802748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.805 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.805 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3768494 00:20:06.805 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3768494 /var/tmp/bdevperf.sock 00:20:06.805 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:06.805 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3768494 ']' 00:20:06.805 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.805 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.806 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.806 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.806 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.806 [2024-11-26 19:21:29.876353] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
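waitforlisten shows up before every RPC in this test; its real definition lives in common/autotest_common.sh, but the pattern amounts to polling the app's RPC socket until it answers. A hedged stand-in (function name, retry budget and sleep interval are illustrative only):

# Hedged stand-in for the waitforlisten helper used above: poll the RPC socket until the app answers or dies.
wait_for_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1                                   # app exited while starting
        scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0    # socket is up and serving RPCs
        sleep 0.1
    done
    return 1
}

wait_for_rpc "$bdevperf_pid" /var/tmp/bdevperf.sock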
00:20:06.806 [2024-11-26 19:21:29.876394] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3768494 ] 00:20:07.065 [2024-11-26 19:21:29.948512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.065 [2024-11-26 19:21:29.990371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.065 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.065 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:07.065 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ASJZDZz3B0 00:20:07.324 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:07.583 [2024-11-26 19:21:30.453538] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.583 nvme0n1 00:20:07.583 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:07.583 Running I/O for 1 seconds... 00:20:08.958 5500.00 IOPS, 21.48 MiB/s 00:20:08.958 Latency(us) 00:20:08.958 [2024-11-26T18:21:32.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.958 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:08.958 Verification LBA range: start 0x0 length 0x2000 00:20:08.958 nvme0n1 : 1.01 5549.71 21.68 0.00 0.00 22909.13 5867.03 22594.32 00:20:08.958 [2024-11-26T18:21:32.072Z] =================================================================================================================== 00:20:08.958 [2024-11-26T18:21:32.072Z] Total : 5549.71 21.68 0.00 0.00 22909.13 5867.03 22594.32 00:20:08.958 { 00:20:08.958 "results": [ 00:20:08.958 { 00:20:08.958 "job": "nvme0n1", 00:20:08.958 "core_mask": "0x2", 00:20:08.958 "workload": "verify", 00:20:08.958 "status": "finished", 00:20:08.958 "verify_range": { 00:20:08.958 "start": 0, 00:20:08.958 "length": 8192 00:20:08.958 }, 00:20:08.958 "queue_depth": 128, 00:20:08.958 "io_size": 4096, 00:20:08.958 "runtime": 1.014288, 00:20:08.958 "iops": 5549.705803479879, 00:20:08.958 "mibps": 21.67853829484328, 00:20:08.958 "io_failed": 0, 00:20:08.958 "io_timeout": 0, 00:20:08.958 "avg_latency_us": 22909.128383794803, 00:20:08.958 "min_latency_us": 5867.032380952381, 00:20:08.958 "max_latency_us": 22594.31619047619 00:20:08.958 } 00:20:08.958 ], 00:20:08.958 "core_count": 1 00:20:08.958 } 00:20:08.958 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:08.958 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.958 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.958 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.958 19:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:08.958 "subsystems": [ 00:20:08.958 { 00:20:08.958 "subsystem": "keyring", 00:20:08.958 "config": [ 00:20:08.958 { 00:20:08.958 "method": "keyring_file_add_key", 00:20:08.958 "params": { 00:20:08.958 "name": "key0", 00:20:08.958 "path": "/tmp/tmp.ASJZDZz3B0" 00:20:08.958 } 00:20:08.958 } 00:20:08.958 ] 00:20:08.958 }, 00:20:08.958 { 00:20:08.958 "subsystem": "iobuf", 00:20:08.958 "config": [ 00:20:08.958 { 00:20:08.958 "method": "iobuf_set_options", 00:20:08.958 "params": { 00:20:08.958 "small_pool_count": 8192, 00:20:08.958 "large_pool_count": 1024, 00:20:08.958 "small_bufsize": 8192, 00:20:08.958 "large_bufsize": 135168, 00:20:08.958 "enable_numa": false 00:20:08.958 } 00:20:08.958 } 00:20:08.958 ] 00:20:08.958 }, 00:20:08.958 { 00:20:08.958 "subsystem": "sock", 00:20:08.958 "config": [ 00:20:08.958 { 00:20:08.958 "method": "sock_set_default_impl", 00:20:08.958 "params": { 00:20:08.958 "impl_name": "posix" 00:20:08.958 } 00:20:08.958 }, 00:20:08.958 { 00:20:08.958 "method": "sock_impl_set_options", 00:20:08.958 "params": { 00:20:08.958 "impl_name": "ssl", 00:20:08.958 "recv_buf_size": 4096, 00:20:08.958 "send_buf_size": 4096, 00:20:08.958 "enable_recv_pipe": true, 00:20:08.958 "enable_quickack": false, 00:20:08.958 "enable_placement_id": 0, 00:20:08.958 "enable_zerocopy_send_server": true, 00:20:08.958 "enable_zerocopy_send_client": false, 00:20:08.958 "zerocopy_threshold": 0, 00:20:08.958 "tls_version": 0, 00:20:08.958 "enable_ktls": false 00:20:08.958 } 00:20:08.958 }, 00:20:08.958 { 00:20:08.958 "method": "sock_impl_set_options", 00:20:08.958 "params": { 00:20:08.958 "impl_name": "posix", 00:20:08.958 "recv_buf_size": 2097152, 00:20:08.958 "send_buf_size": 2097152, 00:20:08.958 "enable_recv_pipe": true, 00:20:08.958 "enable_quickack": false, 00:20:08.958 "enable_placement_id": 0, 00:20:08.958 "enable_zerocopy_send_server": true, 00:20:08.958 "enable_zerocopy_send_client": false, 00:20:08.958 "zerocopy_threshold": 0, 00:20:08.958 "tls_version": 0, 00:20:08.958 "enable_ktls": false 00:20:08.958 } 00:20:08.958 } 00:20:08.958 ] 00:20:08.958 }, 00:20:08.958 { 00:20:08.958 "subsystem": "vmd", 00:20:08.958 "config": [] 00:20:08.958 }, 00:20:08.958 { 00:20:08.958 "subsystem": "accel", 00:20:08.958 "config": [ 00:20:08.958 { 00:20:08.958 "method": "accel_set_options", 00:20:08.958 "params": { 00:20:08.958 "small_cache_size": 128, 00:20:08.958 "large_cache_size": 16, 00:20:08.958 "task_count": 2048, 00:20:08.958 "sequence_count": 2048, 00:20:08.958 "buf_count": 2048 00:20:08.958 } 00:20:08.958 } 00:20:08.958 ] 00:20:08.958 }, 00:20:08.958 { 00:20:08.958 "subsystem": "bdev", 00:20:08.958 "config": [ 00:20:08.958 { 00:20:08.958 "method": "bdev_set_options", 00:20:08.958 "params": { 00:20:08.958 "bdev_io_pool_size": 65535, 00:20:08.958 "bdev_io_cache_size": 256, 00:20:08.958 "bdev_auto_examine": true, 00:20:08.958 "iobuf_small_cache_size": 128, 00:20:08.958 "iobuf_large_cache_size": 16 00:20:08.958 } 00:20:08.958 }, 00:20:08.958 { 00:20:08.958 "method": "bdev_raid_set_options", 00:20:08.958 "params": { 00:20:08.958 "process_window_size_kb": 1024, 00:20:08.958 "process_max_bandwidth_mb_sec": 0 00:20:08.958 } 00:20:08.958 }, 00:20:08.958 { 00:20:08.959 "method": "bdev_iscsi_set_options", 00:20:08.959 "params": { 00:20:08.959 "timeout_sec": 30 00:20:08.959 } 00:20:08.959 }, 00:20:08.959 { 00:20:08.959 "method": "bdev_nvme_set_options", 00:20:08.959 "params": { 00:20:08.959 "action_on_timeout": "none", 00:20:08.959 
"timeout_us": 0, 00:20:08.959 "timeout_admin_us": 0, 00:20:08.959 "keep_alive_timeout_ms": 10000, 00:20:08.959 "arbitration_burst": 0, 00:20:08.959 "low_priority_weight": 0, 00:20:08.959 "medium_priority_weight": 0, 00:20:08.959 "high_priority_weight": 0, 00:20:08.959 "nvme_adminq_poll_period_us": 10000, 00:20:08.959 "nvme_ioq_poll_period_us": 0, 00:20:08.959 "io_queue_requests": 0, 00:20:08.959 "delay_cmd_submit": true, 00:20:08.959 "transport_retry_count": 4, 00:20:08.959 "bdev_retry_count": 3, 00:20:08.959 "transport_ack_timeout": 0, 00:20:08.959 "ctrlr_loss_timeout_sec": 0, 00:20:08.959 "reconnect_delay_sec": 0, 00:20:08.959 "fast_io_fail_timeout_sec": 0, 00:20:08.959 "disable_auto_failback": false, 00:20:08.959 "generate_uuids": false, 00:20:08.959 "transport_tos": 0, 00:20:08.959 "nvme_error_stat": false, 00:20:08.959 "rdma_srq_size": 0, 00:20:08.959 "io_path_stat": false, 00:20:08.959 "allow_accel_sequence": false, 00:20:08.959 "rdma_max_cq_size": 0, 00:20:08.959 "rdma_cm_event_timeout_ms": 0, 00:20:08.959 "dhchap_digests": [ 00:20:08.959 "sha256", 00:20:08.959 "sha384", 00:20:08.959 "sha512" 00:20:08.959 ], 00:20:08.959 "dhchap_dhgroups": [ 00:20:08.959 "null", 00:20:08.959 "ffdhe2048", 00:20:08.959 "ffdhe3072", 00:20:08.959 "ffdhe4096", 00:20:08.959 "ffdhe6144", 00:20:08.959 "ffdhe8192" 00:20:08.959 ] 00:20:08.959 } 00:20:08.959 }, 00:20:08.959 { 00:20:08.959 "method": "bdev_nvme_set_hotplug", 00:20:08.959 "params": { 00:20:08.959 "period_us": 100000, 00:20:08.959 "enable": false 00:20:08.959 } 00:20:08.959 }, 00:20:08.959 { 00:20:08.959 "method": "bdev_malloc_create", 00:20:08.959 "params": { 00:20:08.959 "name": "malloc0", 00:20:08.959 "num_blocks": 8192, 00:20:08.959 "block_size": 4096, 00:20:08.959 "physical_block_size": 4096, 00:20:08.959 "uuid": "c420ac90-6e8a-4e36-9407-b08501057e74", 00:20:08.959 "optimal_io_boundary": 0, 00:20:08.959 "md_size": 0, 00:20:08.959 "dif_type": 0, 00:20:08.959 "dif_is_head_of_md": false, 00:20:08.959 "dif_pi_format": 0 00:20:08.959 } 00:20:08.959 }, 00:20:08.959 { 00:20:08.959 "method": "bdev_wait_for_examine" 00:20:08.959 } 00:20:08.959 ] 00:20:08.959 }, 00:20:08.959 { 00:20:08.959 "subsystem": "nbd", 00:20:08.959 "config": [] 00:20:08.959 }, 00:20:08.959 { 00:20:08.959 "subsystem": "scheduler", 00:20:08.959 "config": [ 00:20:08.959 { 00:20:08.959 "method": "framework_set_scheduler", 00:20:08.959 "params": { 00:20:08.959 "name": "static" 00:20:08.959 } 00:20:08.959 } 00:20:08.959 ] 00:20:08.959 }, 00:20:08.959 { 00:20:08.959 "subsystem": "nvmf", 00:20:08.959 "config": [ 00:20:08.959 { 00:20:08.959 "method": "nvmf_set_config", 00:20:08.959 "params": { 00:20:08.959 "discovery_filter": "match_any", 00:20:08.959 "admin_cmd_passthru": { 00:20:08.959 "identify_ctrlr": false 00:20:08.959 }, 00:20:08.959 "dhchap_digests": [ 00:20:08.959 "sha256", 00:20:08.959 "sha384", 00:20:08.959 "sha512" 00:20:08.959 ], 00:20:08.959 "dhchap_dhgroups": [ 00:20:08.959 "null", 00:20:08.959 "ffdhe2048", 00:20:08.959 "ffdhe3072", 00:20:08.959 "ffdhe4096", 00:20:08.959 "ffdhe6144", 00:20:08.959 "ffdhe8192" 00:20:08.959 ] 00:20:08.959 } 00:20:08.959 }, 00:20:08.959 { 00:20:08.959 "method": "nvmf_set_max_subsystems", 00:20:08.959 "params": { 00:20:08.959 "max_subsystems": 1024 00:20:08.959 } 00:20:08.959 }, 00:20:08.959 { 00:20:08.959 "method": "nvmf_set_crdt", 00:20:08.959 "params": { 00:20:08.959 "crdt1": 0, 00:20:08.959 "crdt2": 0, 00:20:08.959 "crdt3": 0 00:20:08.959 } 00:20:08.959 }, 00:20:08.959 { 00:20:08.959 "method": "nvmf_create_transport", 00:20:08.959 "params": 
{ 00:20:08.959 "trtype": "TCP", 00:20:08.959 "max_queue_depth": 128, 00:20:08.959 "max_io_qpairs_per_ctrlr": 127, 00:20:08.959 "in_capsule_data_size": 4096, 00:20:08.959 "max_io_size": 131072, 00:20:08.959 "io_unit_size": 131072, 00:20:08.959 "max_aq_depth": 128, 00:20:08.959 "num_shared_buffers": 511, 00:20:08.959 "buf_cache_size": 4294967295, 00:20:08.959 "dif_insert_or_strip": false, 00:20:08.959 "zcopy": false, 00:20:08.959 "c2h_success": false, 00:20:08.959 "sock_priority": 0, 00:20:08.959 "abort_timeout_sec": 1, 00:20:08.959 "ack_timeout": 0, 00:20:08.959 "data_wr_pool_size": 0 00:20:08.959 } 00:20:08.959 }, 00:20:08.959 { 00:20:08.959 "method": "nvmf_create_subsystem", 00:20:08.959 "params": { 00:20:08.959 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.959 "allow_any_host": false, 00:20:08.959 "serial_number": "00000000000000000000", 00:20:08.959 "model_number": "SPDK bdev Controller", 00:20:08.959 "max_namespaces": 32, 00:20:08.959 "min_cntlid": 1, 00:20:08.959 "max_cntlid": 65519, 00:20:08.959 "ana_reporting": false 00:20:08.959 } 00:20:08.959 }, 00:20:08.959 { 00:20:08.959 "method": "nvmf_subsystem_add_host", 00:20:08.959 "params": { 00:20:08.959 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.959 "host": "nqn.2016-06.io.spdk:host1", 00:20:08.959 "psk": "key0" 00:20:08.959 } 00:20:08.959 }, 00:20:08.959 { 00:20:08.959 "method": "nvmf_subsystem_add_ns", 00:20:08.959 "params": { 00:20:08.959 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.959 "namespace": { 00:20:08.959 "nsid": 1, 00:20:08.959 "bdev_name": "malloc0", 00:20:08.959 "nguid": "C420AC906E8A4E369407B08501057E74", 00:20:08.959 "uuid": "c420ac90-6e8a-4e36-9407-b08501057e74", 00:20:08.959 "no_auto_visible": false 00:20:08.959 } 00:20:08.959 } 00:20:08.959 }, 00:20:08.959 { 00:20:08.959 "method": "nvmf_subsystem_add_listener", 00:20:08.959 "params": { 00:20:08.959 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.959 "listen_address": { 00:20:08.959 "trtype": "TCP", 00:20:08.959 "adrfam": "IPv4", 00:20:08.959 "traddr": "10.0.0.2", 00:20:08.959 "trsvcid": "4420" 00:20:08.959 }, 00:20:08.959 "secure_channel": false, 00:20:08.959 "sock_impl": "ssl" 00:20:08.959 } 00:20:08.959 } 00:20:08.959 ] 00:20:08.959 } 00:20:08.959 ] 00:20:08.959 }' 00:20:08.959 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:09.218 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:09.218 "subsystems": [ 00:20:09.218 { 00:20:09.218 "subsystem": "keyring", 00:20:09.218 "config": [ 00:20:09.218 { 00:20:09.218 "method": "keyring_file_add_key", 00:20:09.218 "params": { 00:20:09.218 "name": "key0", 00:20:09.218 "path": "/tmp/tmp.ASJZDZz3B0" 00:20:09.218 } 00:20:09.218 } 00:20:09.218 ] 00:20:09.218 }, 00:20:09.218 { 00:20:09.218 "subsystem": "iobuf", 00:20:09.218 "config": [ 00:20:09.218 { 00:20:09.218 "method": "iobuf_set_options", 00:20:09.218 "params": { 00:20:09.219 "small_pool_count": 8192, 00:20:09.219 "large_pool_count": 1024, 00:20:09.219 "small_bufsize": 8192, 00:20:09.219 "large_bufsize": 135168, 00:20:09.219 "enable_numa": false 00:20:09.219 } 00:20:09.219 } 00:20:09.219 ] 00:20:09.219 }, 00:20:09.219 { 00:20:09.219 "subsystem": "sock", 00:20:09.219 "config": [ 00:20:09.219 { 00:20:09.219 "method": "sock_set_default_impl", 00:20:09.219 "params": { 00:20:09.219 "impl_name": "posix" 00:20:09.219 } 00:20:09.219 }, 00:20:09.219 { 00:20:09.219 "method": "sock_impl_set_options", 00:20:09.219 
"params": { 00:20:09.219 "impl_name": "ssl", 00:20:09.219 "recv_buf_size": 4096, 00:20:09.219 "send_buf_size": 4096, 00:20:09.219 "enable_recv_pipe": true, 00:20:09.219 "enable_quickack": false, 00:20:09.219 "enable_placement_id": 0, 00:20:09.219 "enable_zerocopy_send_server": true, 00:20:09.219 "enable_zerocopy_send_client": false, 00:20:09.219 "zerocopy_threshold": 0, 00:20:09.219 "tls_version": 0, 00:20:09.219 "enable_ktls": false 00:20:09.219 } 00:20:09.219 }, 00:20:09.219 { 00:20:09.219 "method": "sock_impl_set_options", 00:20:09.219 "params": { 00:20:09.219 "impl_name": "posix", 00:20:09.219 "recv_buf_size": 2097152, 00:20:09.219 "send_buf_size": 2097152, 00:20:09.219 "enable_recv_pipe": true, 00:20:09.219 "enable_quickack": false, 00:20:09.219 "enable_placement_id": 0, 00:20:09.219 "enable_zerocopy_send_server": true, 00:20:09.219 "enable_zerocopy_send_client": false, 00:20:09.219 "zerocopy_threshold": 0, 00:20:09.219 "tls_version": 0, 00:20:09.219 "enable_ktls": false 00:20:09.219 } 00:20:09.219 } 00:20:09.219 ] 00:20:09.219 }, 00:20:09.219 { 00:20:09.219 "subsystem": "vmd", 00:20:09.219 "config": [] 00:20:09.219 }, 00:20:09.219 { 00:20:09.219 "subsystem": "accel", 00:20:09.219 "config": [ 00:20:09.219 { 00:20:09.219 "method": "accel_set_options", 00:20:09.219 "params": { 00:20:09.219 "small_cache_size": 128, 00:20:09.219 "large_cache_size": 16, 00:20:09.219 "task_count": 2048, 00:20:09.219 "sequence_count": 2048, 00:20:09.219 "buf_count": 2048 00:20:09.219 } 00:20:09.219 } 00:20:09.219 ] 00:20:09.219 }, 00:20:09.219 { 00:20:09.219 "subsystem": "bdev", 00:20:09.219 "config": [ 00:20:09.219 { 00:20:09.219 "method": "bdev_set_options", 00:20:09.219 "params": { 00:20:09.219 "bdev_io_pool_size": 65535, 00:20:09.219 "bdev_io_cache_size": 256, 00:20:09.219 "bdev_auto_examine": true, 00:20:09.219 "iobuf_small_cache_size": 128, 00:20:09.219 "iobuf_large_cache_size": 16 00:20:09.219 } 00:20:09.219 }, 00:20:09.219 { 00:20:09.219 "method": "bdev_raid_set_options", 00:20:09.219 "params": { 00:20:09.219 "process_window_size_kb": 1024, 00:20:09.219 "process_max_bandwidth_mb_sec": 0 00:20:09.219 } 00:20:09.219 }, 00:20:09.219 { 00:20:09.219 "method": "bdev_iscsi_set_options", 00:20:09.219 "params": { 00:20:09.219 "timeout_sec": 30 00:20:09.219 } 00:20:09.219 }, 00:20:09.219 { 00:20:09.219 "method": "bdev_nvme_set_options", 00:20:09.219 "params": { 00:20:09.219 "action_on_timeout": "none", 00:20:09.219 "timeout_us": 0, 00:20:09.219 "timeout_admin_us": 0, 00:20:09.219 "keep_alive_timeout_ms": 10000, 00:20:09.219 "arbitration_burst": 0, 00:20:09.219 "low_priority_weight": 0, 00:20:09.219 "medium_priority_weight": 0, 00:20:09.219 "high_priority_weight": 0, 00:20:09.219 "nvme_adminq_poll_period_us": 10000, 00:20:09.219 "nvme_ioq_poll_period_us": 0, 00:20:09.219 "io_queue_requests": 512, 00:20:09.219 "delay_cmd_submit": true, 00:20:09.219 "transport_retry_count": 4, 00:20:09.219 "bdev_retry_count": 3, 00:20:09.219 "transport_ack_timeout": 0, 00:20:09.219 "ctrlr_loss_timeout_sec": 0, 00:20:09.219 "reconnect_delay_sec": 0, 00:20:09.219 "fast_io_fail_timeout_sec": 0, 00:20:09.219 "disable_auto_failback": false, 00:20:09.219 "generate_uuids": false, 00:20:09.219 "transport_tos": 0, 00:20:09.219 "nvme_error_stat": false, 00:20:09.219 "rdma_srq_size": 0, 00:20:09.219 "io_path_stat": false, 00:20:09.219 "allow_accel_sequence": false, 00:20:09.219 "rdma_max_cq_size": 0, 00:20:09.219 "rdma_cm_event_timeout_ms": 0, 00:20:09.219 "dhchap_digests": [ 00:20:09.219 "sha256", 00:20:09.219 "sha384", 00:20:09.219 
"sha512" 00:20:09.219 ], 00:20:09.219 "dhchap_dhgroups": [ 00:20:09.219 "null", 00:20:09.219 "ffdhe2048", 00:20:09.219 "ffdhe3072", 00:20:09.219 "ffdhe4096", 00:20:09.219 "ffdhe6144", 00:20:09.219 "ffdhe8192" 00:20:09.219 ] 00:20:09.219 } 00:20:09.219 }, 00:20:09.219 { 00:20:09.219 "method": "bdev_nvme_attach_controller", 00:20:09.219 "params": { 00:20:09.219 "name": "nvme0", 00:20:09.219 "trtype": "TCP", 00:20:09.219 "adrfam": "IPv4", 00:20:09.219 "traddr": "10.0.0.2", 00:20:09.219 "trsvcid": "4420", 00:20:09.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.219 "prchk_reftag": false, 00:20:09.219 "prchk_guard": false, 00:20:09.219 "ctrlr_loss_timeout_sec": 0, 00:20:09.219 "reconnect_delay_sec": 0, 00:20:09.219 "fast_io_fail_timeout_sec": 0, 00:20:09.219 "psk": "key0", 00:20:09.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:09.219 "hdgst": false, 00:20:09.219 "ddgst": false, 00:20:09.219 "multipath": "multipath" 00:20:09.219 } 00:20:09.219 }, 00:20:09.219 { 00:20:09.219 "method": "bdev_nvme_set_hotplug", 00:20:09.219 "params": { 00:20:09.219 "period_us": 100000, 00:20:09.219 "enable": false 00:20:09.219 } 00:20:09.219 }, 00:20:09.219 { 00:20:09.219 "method": "bdev_enable_histogram", 00:20:09.219 "params": { 00:20:09.219 "name": "nvme0n1", 00:20:09.219 "enable": true 00:20:09.219 } 00:20:09.219 }, 00:20:09.219 { 00:20:09.219 "method": "bdev_wait_for_examine" 00:20:09.219 } 00:20:09.219 ] 00:20:09.219 }, 00:20:09.219 { 00:20:09.219 "subsystem": "nbd", 00:20:09.219 "config": [] 00:20:09.219 } 00:20:09.219 ] 00:20:09.219 }' 00:20:09.219 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3768494 00:20:09.219 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3768494 ']' 00:20:09.219 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3768494 00:20:09.219 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:09.219 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.219 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3768494 00:20:09.219 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:09.219 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:09.219 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3768494' 00:20:09.219 killing process with pid 3768494 00:20:09.219 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3768494 00:20:09.219 Received shutdown signal, test time was about 1.000000 seconds 00:20:09.219 00:20:09.219 Latency(us) 00:20:09.219 [2024-11-26T18:21:32.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.219 [2024-11-26T18:21:32.333Z] =================================================================================================================== 00:20:09.219 [2024-11-26T18:21:32.333Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.219 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3768494 00:20:09.219 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3768420 00:20:09.219 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3768420 
']' 00:20:09.219 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3768420 00:20:09.219 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:09.219 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.220 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3768420 00:20:09.479 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:09.479 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:09.479 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3768420' 00:20:09.479 killing process with pid 3768420 00:20:09.479 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3768420 00:20:09.479 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3768420 00:20:09.479 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:09.479 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:09.479 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:09.479 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:09.479 "subsystems": [ 00:20:09.479 { 00:20:09.479 "subsystem": "keyring", 00:20:09.479 "config": [ 00:20:09.479 { 00:20:09.479 "method": "keyring_file_add_key", 00:20:09.479 "params": { 00:20:09.479 "name": "key0", 00:20:09.479 "path": "/tmp/tmp.ASJZDZz3B0" 00:20:09.479 } 00:20:09.479 } 00:20:09.479 ] 00:20:09.479 }, 00:20:09.479 { 00:20:09.479 "subsystem": "iobuf", 00:20:09.479 "config": [ 00:20:09.479 { 00:20:09.479 "method": "iobuf_set_options", 00:20:09.479 "params": { 00:20:09.479 "small_pool_count": 8192, 00:20:09.479 "large_pool_count": 1024, 00:20:09.479 "small_bufsize": 8192, 00:20:09.479 "large_bufsize": 135168, 00:20:09.479 "enable_numa": false 00:20:09.479 } 00:20:09.479 } 00:20:09.479 ] 00:20:09.479 }, 00:20:09.479 { 00:20:09.479 "subsystem": "sock", 00:20:09.479 "config": [ 00:20:09.479 { 00:20:09.479 "method": "sock_set_default_impl", 00:20:09.479 "params": { 00:20:09.479 "impl_name": "posix" 00:20:09.479 } 00:20:09.479 }, 00:20:09.479 { 00:20:09.479 "method": "sock_impl_set_options", 00:20:09.479 "params": { 00:20:09.479 "impl_name": "ssl", 00:20:09.479 "recv_buf_size": 4096, 00:20:09.479 "send_buf_size": 4096, 00:20:09.479 "enable_recv_pipe": true, 00:20:09.479 "enable_quickack": false, 00:20:09.479 "enable_placement_id": 0, 00:20:09.479 "enable_zerocopy_send_server": true, 00:20:09.479 "enable_zerocopy_send_client": false, 00:20:09.479 "zerocopy_threshold": 0, 00:20:09.479 "tls_version": 0, 00:20:09.479 "enable_ktls": false 00:20:09.479 } 00:20:09.479 }, 00:20:09.479 { 00:20:09.479 "method": "sock_impl_set_options", 00:20:09.479 "params": { 00:20:09.479 "impl_name": "posix", 00:20:09.479 "recv_buf_size": 2097152, 00:20:09.479 "send_buf_size": 2097152, 00:20:09.479 "enable_recv_pipe": true, 00:20:09.479 "enable_quickack": false, 00:20:09.479 "enable_placement_id": 0, 00:20:09.479 "enable_zerocopy_send_server": true, 00:20:09.479 "enable_zerocopy_send_client": false, 00:20:09.479 "zerocopy_threshold": 0, 00:20:09.479 "tls_version": 0, 00:20:09.479 "enable_ktls": 
false 00:20:09.479 } 00:20:09.479 } 00:20:09.479 ] 00:20:09.479 }, 00:20:09.479 { 00:20:09.479 "subsystem": "vmd", 00:20:09.479 "config": [] 00:20:09.479 }, 00:20:09.479 { 00:20:09.479 "subsystem": "accel", 00:20:09.479 "config": [ 00:20:09.479 { 00:20:09.479 "method": "accel_set_options", 00:20:09.479 "params": { 00:20:09.479 "small_cache_size": 128, 00:20:09.479 "large_cache_size": 16, 00:20:09.479 "task_count": 2048, 00:20:09.479 "sequence_count": 2048, 00:20:09.479 "buf_count": 2048 00:20:09.479 } 00:20:09.479 } 00:20:09.479 ] 00:20:09.479 }, 00:20:09.479 { 00:20:09.479 "subsystem": "bdev", 00:20:09.479 "config": [ 00:20:09.479 { 00:20:09.479 "method": "bdev_set_options", 00:20:09.479 "params": { 00:20:09.479 "bdev_io_pool_size": 65535, 00:20:09.479 "bdev_io_cache_size": 256, 00:20:09.479 "bdev_auto_examine": true, 00:20:09.479 "iobuf_small_cache_size": 128, 00:20:09.479 "iobuf_large_cache_size": 16 00:20:09.479 } 00:20:09.479 }, 00:20:09.479 { 00:20:09.479 "method": "bdev_raid_set_options", 00:20:09.479 "params": { 00:20:09.479 "process_window_size_kb": 1024, 00:20:09.479 "process_max_bandwidth_mb_sec": 0 00:20:09.479 } 00:20:09.479 }, 00:20:09.479 { 00:20:09.479 "method": "bdev_iscsi_set_options", 00:20:09.479 "params": { 00:20:09.479 "timeout_sec": 30 00:20:09.479 } 00:20:09.479 }, 00:20:09.479 { 00:20:09.479 "method": "bdev_nvme_set_options", 00:20:09.479 "params": { 00:20:09.479 "action_on_timeout": "none", 00:20:09.479 "timeout_us": 0, 00:20:09.479 "timeout_admin_us": 0, 00:20:09.479 "keep_alive_timeout_ms": 10000, 00:20:09.479 "arbitration_burst": 0, 00:20:09.479 "low_priority_weight": 0, 00:20:09.479 "medium_priority_weight": 0, 00:20:09.479 "high_priority_weight": 0, 00:20:09.479 "nvme_adminq_poll_period_us": 10000, 00:20:09.479 "nvme_ioq_poll_period_us": 0, 00:20:09.479 "io_queue_requests": 0, 00:20:09.479 "delay_cmd_submit": true, 00:20:09.479 "transport_retry_count": 4, 00:20:09.479 "bdev_retry_count": 3, 00:20:09.479 "transport_ack_timeout": 0, 00:20:09.479 "ctrlr_loss_timeout_sec": 0, 00:20:09.479 "reconnect_delay_sec": 0, 00:20:09.479 "fast_io_fail_timeout_sec": 0, 00:20:09.479 "disable_auto_failback": false, 00:20:09.479 "generate_uuids": false, 00:20:09.479 "transport_tos": 0, 00:20:09.479 "nvme_error_stat": false, 00:20:09.479 "rdma_srq_size": 0, 00:20:09.479 "io_path_stat": false, 00:20:09.479 "allow_accel_sequence": false, 00:20:09.479 "rdma_max_cq_size": 0, 00:20:09.479 "rdma_cm_event_timeout_ms": 0, 00:20:09.479 "dhchap_digests": [ 00:20:09.479 "sha256", 00:20:09.479 "sha384", 00:20:09.479 "sha512" 00:20:09.479 ], 00:20:09.479 "dhchap_dhgroups": [ 00:20:09.479 "null", 00:20:09.479 "ffdhe2048", 00:20:09.479 "ffdhe3072", 00:20:09.479 "ffdhe4096", 00:20:09.479 "ffdhe6144", 00:20:09.479 "ffdhe8192" 00:20:09.480 ] 00:20:09.480 } 00:20:09.480 }, 00:20:09.480 { 00:20:09.480 "method": "bdev_nvme_set_hotplug", 00:20:09.480 "params": { 00:20:09.480 "period_us": 100000, 00:20:09.480 "enable": false 00:20:09.480 } 00:20:09.480 }, 00:20:09.480 { 00:20:09.480 "method": "bdev_malloc_create", 00:20:09.480 "params": { 00:20:09.480 "name": "malloc0", 00:20:09.480 "num_blocks": 8192, 00:20:09.480 "block_size": 4096, 00:20:09.480 "physical_block_size": 4096, 00:20:09.480 "uuid": "c420ac90-6e8a-4e36-9407-b08501057e74", 00:20:09.480 "optimal_io_boundary": 0, 00:20:09.480 "md_size": 0, 00:20:09.480 "dif_type": 0, 00:20:09.480 "dif_is_head_of_md": false, 00:20:09.480 "dif_pi_format": 0 00:20:09.480 } 00:20:09.480 }, 00:20:09.480 { 00:20:09.480 "method": "bdev_wait_for_examine" 
00:20:09.480 } 00:20:09.480 ] 00:20:09.480 }, 00:20:09.480 { 00:20:09.480 "subsystem": "nbd", 00:20:09.480 "config": [] 00:20:09.480 }, 00:20:09.480 { 00:20:09.480 "subsystem": "scheduler", 00:20:09.480 "config": [ 00:20:09.480 { 00:20:09.480 "method": "framework_set_scheduler", 00:20:09.480 "params": { 00:20:09.480 "name": "static" 00:20:09.480 } 00:20:09.480 } 00:20:09.480 ] 00:20:09.480 }, 00:20:09.480 { 00:20:09.480 "subsystem": "nvmf", 00:20:09.480 "config": [ 00:20:09.480 { 00:20:09.480 "method": "nvmf_set_config", 00:20:09.480 "params": { 00:20:09.480 "discovery_filter": "match_any", 00:20:09.480 "admin_cmd_passthru": { 00:20:09.480 "identify_ctrlr": false 00:20:09.480 }, 00:20:09.480 "dhchap_digests": [ 00:20:09.480 "sha256", 00:20:09.480 "sha384", 00:20:09.480 "sha512" 00:20:09.480 ], 00:20:09.480 "dhchap_dhgroups": [ 00:20:09.480 "null", 00:20:09.480 "ffdhe2048", 00:20:09.480 "ffdhe3072", 00:20:09.480 "ffdhe4096", 00:20:09.480 "ffdhe6144", 00:20:09.480 "ffdhe8192" 00:20:09.480 ] 00:20:09.480 } 00:20:09.480 }, 00:20:09.480 { 00:20:09.480 "method": "nvmf_set_max_subsystems", 00:20:09.480 "params": { 00:20:09.480 "max_subsystems": 1024 00:20:09.480 } 00:20:09.480 }, 00:20:09.480 { 00:20:09.480 "method": "nvmf_set_crdt", 00:20:09.480 "params": { 00:20:09.480 "crdt1": 0, 00:20:09.480 "crdt2": 0, 00:20:09.480 "crdt3": 0 00:20:09.480 } 00:20:09.480 }, 00:20:09.480 { 00:20:09.480 "method": "nvmf_create_transport", 00:20:09.480 "params": { 00:20:09.480 "trtype": "TCP", 00:20:09.480 "max_queue_depth": 128, 00:20:09.480 "max_io_qpairs_per_ctrlr": 127, 00:20:09.480 "in_capsule_data_size": 4096, 00:20:09.480 "max_io_size": 131072, 00:20:09.480 "io_unit_size": 131072, 00:20:09.480 "max_aq_depth": 128, 00:20:09.480 "num_shared_buffers": 511, 00:20:09.480 "buf_cache_size": 4294967295, 00:20:09.480 "dif_insert_or_strip": false, 00:20:09.480 "zcopy": false, 00:20:09.480 "c2h_success": false, 00:20:09.480 "sock_priority": 0, 00:20:09.480 "abort_timeout_sec": 1, 00:20:09.480 "ack_timeout": 0, 00:20:09.480 "data_wr_pool_size": 0 00:20:09.480 } 00:20:09.480 }, 00:20:09.480 { 00:20:09.480 "method": "nvmf_create_subsystem", 00:20:09.480 "params": { 00:20:09.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.480 "allow_any_host": false, 00:20:09.480 "serial_number": "00000000000000000000", 00:20:09.480 "model_number": "SPDK bdev Controller", 00:20:09.480 "max_namespaces": 32, 00:20:09.480 "min_cntlid": 1, 00:20:09.480 "max_cntlid": 65519, 00:20:09.480 "ana_reporting": false 00:20:09.480 } 00:20:09.480 }, 00:20:09.480 { 00:20:09.480 "method": "nvmf_subsystem_add_host", 00:20:09.480 "params": { 00:20:09.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.480 "host": "nqn.2016-06.io.spdk:host1", 00:20:09.480 "psk": "key0" 00:20:09.480 } 00:20:09.480 }, 00:20:09.480 { 00:20:09.480 "method": "nvmf_subsystem_add_ns", 00:20:09.480 "params": { 00:20:09.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.480 "namespace": { 00:20:09.480 "nsid": 1, 00:20:09.480 "bdev_name": "malloc0", 00:20:09.480 "nguid": "C420AC906E8A4E369407B08501057E74", 00:20:09.480 "uuid": "c420ac90-6e8a-4e36-9407-b08501057e74", 00:20:09.480 "no_auto_visible": false 00:20:09.480 } 00:20:09.480 } 00:20:09.480 }, 00:20:09.480 { 00:20:09.480 "method": "nvmf_subsystem_add_listener", 00:20:09.480 "params": { 00:20:09.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.480 "listen_address": { 00:20:09.480 "trtype": "TCP", 00:20:09.480 "adrfam": "IPv4", 00:20:09.480 "traddr": "10.0.0.2", 00:20:09.480 "trsvcid": "4420" 00:20:09.480 }, 00:20:09.480 
"secure_channel": false, 00:20:09.480 "sock_impl": "ssl" 00:20:09.480 } 00:20:09.480 } 00:20:09.480 ] 00:20:09.480 } 00:20:09.480 ] 00:20:09.480 }' 00:20:09.480 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.480 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3768921 00:20:09.480 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:09.480 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3768921 00:20:09.480 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3768921 ']' 00:20:09.480 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.480 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.480 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.480 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.480 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.480 [2024-11-26 19:21:32.582803] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:20:09.480 [2024-11-26 19:21:32.582848] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.739 [2024-11-26 19:21:32.648315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.739 [2024-11-26 19:21:32.686694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.739 [2024-11-26 19:21:32.686735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.739 [2024-11-26 19:21:32.686742] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.739 [2024-11-26 19:21:32.686752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.739 [2024-11-26 19:21:32.686757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:09.739 [2024-11-26 19:21:32.687356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.998 [2024-11-26 19:21:32.901901] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.998 [2024-11-26 19:21:32.933942] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.998 [2024-11-26 19:21:32.934177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.565 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.565 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:10.565 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:10.565 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:10.565 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.565 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.565 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3769163 00:20:10.565 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3769163 /var/tmp/bdevperf.sock 00:20:10.565 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3769163 ']' 00:20:10.565 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.565 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:10.565 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.565 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:10.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
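The bdevperf side repeats the trick with the bperfcfg blob, which, unlike the target config, already contains the keyring entry, the bdev_nvme_attach_controller call and bdev_enable_histogram, so the new process comes up with the TLS-attached controller pre-wired. A condensed sketch (ordering compressed; the real script captures the config before killing the previous bdevperf instance):

# bdevperf side of the round-trip: the saved config pre-wires key0, the controller attach and the histogram.
bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
bdevperf_pid=$!
wait_for_rpc "$bdevperf_pid" /var/tmp/bdevperf.sock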
00:20:10.565 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:10.565 "subsystems": [ 00:20:10.565 { 00:20:10.565 "subsystem": "keyring", 00:20:10.565 "config": [ 00:20:10.565 { 00:20:10.565 "method": "keyring_file_add_key", 00:20:10.565 "params": { 00:20:10.565 "name": "key0", 00:20:10.565 "path": "/tmp/tmp.ASJZDZz3B0" 00:20:10.565 } 00:20:10.565 } 00:20:10.565 ] 00:20:10.565 }, 00:20:10.565 { 00:20:10.565 "subsystem": "iobuf", 00:20:10.565 "config": [ 00:20:10.565 { 00:20:10.565 "method": "iobuf_set_options", 00:20:10.565 "params": { 00:20:10.566 "small_pool_count": 8192, 00:20:10.566 "large_pool_count": 1024, 00:20:10.566 "small_bufsize": 8192, 00:20:10.566 "large_bufsize": 135168, 00:20:10.566 "enable_numa": false 00:20:10.566 } 00:20:10.566 } 00:20:10.566 ] 00:20:10.566 }, 00:20:10.566 { 00:20:10.566 "subsystem": "sock", 00:20:10.566 "config": [ 00:20:10.566 { 00:20:10.566 "method": "sock_set_default_impl", 00:20:10.566 "params": { 00:20:10.566 "impl_name": "posix" 00:20:10.566 } 00:20:10.566 }, 00:20:10.566 { 00:20:10.566 "method": "sock_impl_set_options", 00:20:10.566 "params": { 00:20:10.566 "impl_name": "ssl", 00:20:10.566 "recv_buf_size": 4096, 00:20:10.566 "send_buf_size": 4096, 00:20:10.566 "enable_recv_pipe": true, 00:20:10.566 "enable_quickack": false, 00:20:10.566 "enable_placement_id": 0, 00:20:10.566 "enable_zerocopy_send_server": true, 00:20:10.566 "enable_zerocopy_send_client": false, 00:20:10.566 "zerocopy_threshold": 0, 00:20:10.566 "tls_version": 0, 00:20:10.566 "enable_ktls": false 00:20:10.566 } 00:20:10.566 }, 00:20:10.566 { 00:20:10.566 "method": "sock_impl_set_options", 00:20:10.566 "params": { 00:20:10.566 "impl_name": "posix", 00:20:10.566 "recv_buf_size": 2097152, 00:20:10.566 "send_buf_size": 2097152, 00:20:10.566 "enable_recv_pipe": true, 00:20:10.566 "enable_quickack": false, 00:20:10.566 "enable_placement_id": 0, 00:20:10.566 "enable_zerocopy_send_server": true, 00:20:10.566 "enable_zerocopy_send_client": false, 00:20:10.566 "zerocopy_threshold": 0, 00:20:10.566 "tls_version": 0, 00:20:10.566 "enable_ktls": false 00:20:10.566 } 00:20:10.566 } 00:20:10.566 ] 00:20:10.566 }, 00:20:10.566 { 00:20:10.566 "subsystem": "vmd", 00:20:10.566 "config": [] 00:20:10.566 }, 00:20:10.566 { 00:20:10.566 "subsystem": "accel", 00:20:10.566 "config": [ 00:20:10.566 { 00:20:10.566 "method": "accel_set_options", 00:20:10.566 "params": { 00:20:10.566 "small_cache_size": 128, 00:20:10.566 "large_cache_size": 16, 00:20:10.566 "task_count": 2048, 00:20:10.566 "sequence_count": 2048, 00:20:10.566 "buf_count": 2048 00:20:10.566 } 00:20:10.566 } 00:20:10.566 ] 00:20:10.566 }, 00:20:10.566 { 00:20:10.566 "subsystem": "bdev", 00:20:10.566 "config": [ 00:20:10.566 { 00:20:10.566 "method": "bdev_set_options", 00:20:10.566 "params": { 00:20:10.566 "bdev_io_pool_size": 65535, 00:20:10.566 "bdev_io_cache_size": 256, 00:20:10.566 "bdev_auto_examine": true, 00:20:10.566 "iobuf_small_cache_size": 128, 00:20:10.566 "iobuf_large_cache_size": 16 00:20:10.566 } 00:20:10.566 }, 00:20:10.566 { 00:20:10.566 "method": "bdev_raid_set_options", 00:20:10.566 "params": { 00:20:10.566 "process_window_size_kb": 1024, 00:20:10.566 "process_max_bandwidth_mb_sec": 0 00:20:10.566 } 00:20:10.566 }, 00:20:10.566 { 00:20:10.566 "method": "bdev_iscsi_set_options", 00:20:10.566 "params": { 00:20:10.566 "timeout_sec": 30 00:20:10.566 } 00:20:10.566 }, 00:20:10.566 { 00:20:10.566 "method": "bdev_nvme_set_options", 00:20:10.566 "params": { 00:20:10.566 "action_on_timeout": "none", 
00:20:10.566 "timeout_us": 0, 00:20:10.566 "timeout_admin_us": 0, 00:20:10.566 "keep_alive_timeout_ms": 10000, 00:20:10.566 "arbitration_burst": 0, 00:20:10.566 "low_priority_weight": 0, 00:20:10.566 "medium_priority_weight": 0, 00:20:10.566 "high_priority_weight": 0, 00:20:10.566 "nvme_adminq_poll_period_us": 10000, 00:20:10.566 "nvme_ioq_poll_period_us": 0, 00:20:10.566 "io_queue_requests": 512, 00:20:10.566 "delay_cmd_submit": true, 00:20:10.566 "transport_retry_count": 4, 00:20:10.566 "bdev_retry_count": 3, 00:20:10.566 "transport_ack_timeout": 0, 00:20:10.566 "ctrlr_loss_timeout_sec": 0, 00:20:10.566 "reconnect_delay_sec": 0, 00:20:10.566 "fast_io_fail_timeout_sec": 0, 00:20:10.566 "disable_auto_failback": false, 00:20:10.566 "generate_uuids": false, 00:20:10.566 "transport_tos": 0, 00:20:10.566 "nvme_error_stat": false, 00:20:10.566 "rdma_srq_size": 0, 00:20:10.566 "io_path_stat": false, 00:20:10.566 "allow_accel_sequence": false, 00:20:10.566 "rdma_max_cq_size": 0, 00:20:10.566 "rdma_cm_event_timeout_ms": 0, 00:20:10.566 "dhchap_digests": [ 00:20:10.566 "sha256", 00:20:10.566 "sha384", 00:20:10.566 "sha512" 00:20:10.566 ], 00:20:10.566 "dhchap_dhgroups": [ 00:20:10.566 "null", 00:20:10.566 "ffdhe2048", 00:20:10.566 "ffdhe3072", 00:20:10.566 "ffdhe4096", 00:20:10.566 "ffdhe6144", 00:20:10.566 "ffdhe8192" 00:20:10.566 ] 00:20:10.566 } 00:20:10.566 }, 00:20:10.566 { 00:20:10.566 "method": "bdev_nvme_attach_controller", 00:20:10.566 "params": { 00:20:10.566 "name": "nvme0", 00:20:10.566 "trtype": "TCP", 00:20:10.566 "adrfam": "IPv4", 00:20:10.566 "traddr": "10.0.0.2", 00:20:10.566 "trsvcid": "4420", 00:20:10.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.566 "prchk_reftag": false, 00:20:10.566 "prchk_guard": false, 00:20:10.566 "ctrlr_loss_timeout_sec": 0, 00:20:10.566 "reconnect_delay_sec": 0, 00:20:10.566 "fast_io_fail_timeout_sec": 0, 00:20:10.566 "psk": "key0", 00:20:10.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.566 "hdgst": false, 00:20:10.566 "ddgst": false, 00:20:10.566 "multipath": "multipath" 00:20:10.566 } 00:20:10.567 }, 00:20:10.567 { 00:20:10.567 "method": "bdev_nvme_set_hotplug", 00:20:10.567 "params": { 00:20:10.567 "period_us": 100000, 00:20:10.567 "enable": false 00:20:10.567 } 00:20:10.567 }, 00:20:10.567 { 00:20:10.567 "method": "bdev_enable_histogram", 00:20:10.567 "params": { 00:20:10.567 "name": "nvme0n1", 00:20:10.567 "enable": true 00:20:10.567 } 00:20:10.567 }, 00:20:10.567 { 00:20:10.567 "method": "bdev_wait_for_examine" 00:20:10.567 } 00:20:10.567 ] 00:20:10.567 }, 00:20:10.567 { 00:20:10.567 "subsystem": "nbd", 00:20:10.567 "config": [] 00:20:10.567 } 00:20:10.567 ] 00:20:10.567 }' 00:20:10.567 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.567 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.567 [2024-11-26 19:21:33.515021] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:20:10.567 [2024-11-26 19:21:33.515068] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3769163 ] 00:20:10.567 [2024-11-26 19:21:33.588918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.567 [2024-11-26 19:21:33.629281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.826 [2024-11-26 19:21:33.783137] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:11.393 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.393 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:11.393 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:11.393 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:11.652 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.652 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:11.652 Running I/O for 1 seconds... 00:20:12.589 5388.00 IOPS, 21.05 MiB/s 00:20:12.589 Latency(us) 00:20:12.589 [2024-11-26T18:21:35.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.589 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:12.589 Verification LBA range: start 0x0 length 0x2000 00:20:12.589 nvme0n1 : 1.04 5320.12 20.78 0.00 0.00 23725.69 6553.60 35701.52 00:20:12.589 [2024-11-26T18:21:35.703Z] =================================================================================================================== 00:20:12.589 [2024-11-26T18:21:35.703Z] Total : 5320.12 20.78 0.00 0.00 23725.69 6553.60 35701.52 00:20:12.589 { 00:20:12.589 "results": [ 00:20:12.589 { 00:20:12.589 "job": "nvme0n1", 00:20:12.589 "core_mask": "0x2", 00:20:12.589 "workload": "verify", 00:20:12.589 "status": "finished", 00:20:12.589 "verify_range": { 00:20:12.589 "start": 0, 00:20:12.589 "length": 8192 00:20:12.589 }, 00:20:12.589 "queue_depth": 128, 00:20:12.589 "io_size": 4096, 00:20:12.589 "runtime": 1.036819, 00:20:12.589 "iops": 5320.118554926174, 00:20:12.589 "mibps": 20.781713105180366, 00:20:12.589 "io_failed": 0, 00:20:12.589 "io_timeout": 0, 00:20:12.589 "avg_latency_us": 23725.685732242135, 00:20:12.589 "min_latency_us": 6553.6, 00:20:12.589 "max_latency_us": 35701.51619047619 00:20:12.589 } 00:20:12.589 ], 00:20:12.589 "core_count": 1 00:20:12.589 } 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 
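The verification result above is internally consistent: the MiB/s figure is just IOPS times the 4 KiB I/O size, and IOPS itself is the completed I/O count divided by the ~1.04 s runtime. A quick check with the numbers from the result block:

# Sanity-check the reported throughput: MiB/s = IOPS * io_size / 2^20.
awk 'BEGIN { iops = 5320.118554926174; io_size = 4096; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
# prints 20.78 MiB/s, matching the "mibps" field; iops is io count / runtime with runtime = 1.036819 s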
00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:12.849 nvmf_trace.0 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3769163 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3769163 ']' 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3769163 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3769163 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3769163' 00:20:12.849 killing process with pid 3769163 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3769163 00:20:12.849 Received shutdown signal, test time was about 1.000000 seconds 00:20:12.849 00:20:12.849 Latency(us) 00:20:12.849 [2024-11-26T18:21:35.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.849 [2024-11-26T18:21:35.963Z] =================================================================================================================== 00:20:12.849 [2024-11-26T18:21:35.963Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.849 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3769163 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:13.108 rmmod nvme_tcp 00:20:13.108 rmmod nvme_fabrics 00:20:13.108 rmmod nvme_keyring 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:13.108 19:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3768921 ']' 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3768921 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3768921 ']' 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3768921 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3768921 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3768921' 00:20:13.108 killing process with pid 3768921 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3768921 00:20:13.108 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3768921 00:20:13.367 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:13.367 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:13.367 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:13.367 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:13.367 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:13.367 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:13.367 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:13.367 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:13.367 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:13.367 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.367 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.367 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.273 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:15.273 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.aaiUOCGjhN /tmp/tmp.nL6JRdn6hj /tmp/tmp.ASJZDZz3B0 00:20:15.273 00:20:15.273 real 1m20.027s 00:20:15.273 user 2m2.611s 00:20:15.273 sys 0m30.254s 00:20:15.273 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:15.273 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.273 ************************************ 00:20:15.273 END TEST nvmf_tls 
00:20:15.273 ************************************ 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:15.533 ************************************ 00:20:15.533 START TEST nvmf_fips 00:20:15.533 ************************************ 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:15.533 * Looking for test storage... 00:20:15.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:15.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.533 --rc genhtml_branch_coverage=1 00:20:15.533 --rc genhtml_function_coverage=1 00:20:15.533 --rc genhtml_legend=1 00:20:15.533 --rc geninfo_all_blocks=1 00:20:15.533 --rc geninfo_unexecuted_blocks=1 00:20:15.533 00:20:15.533 ' 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:15.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.533 --rc genhtml_branch_coverage=1 00:20:15.533 --rc genhtml_function_coverage=1 00:20:15.533 --rc genhtml_legend=1 00:20:15.533 --rc geninfo_all_blocks=1 00:20:15.533 --rc geninfo_unexecuted_blocks=1 00:20:15.533 00:20:15.533 ' 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:15.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.533 --rc genhtml_branch_coverage=1 00:20:15.533 --rc genhtml_function_coverage=1 00:20:15.533 --rc genhtml_legend=1 00:20:15.533 --rc geninfo_all_blocks=1 00:20:15.533 --rc geninfo_unexecuted_blocks=1 00:20:15.533 00:20:15.533 ' 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:15.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.533 --rc genhtml_branch_coverage=1 00:20:15.533 --rc genhtml_function_coverage=1 00:20:15.533 --rc genhtml_legend=1 00:20:15.533 --rc geninfo_all_blocks=1 00:20:15.533 --rc geninfo_unexecuted_blocks=1 00:20:15.533 00:20:15.533 ' 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:15.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:15.534 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:15.534 19:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:15.794 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:15.795 Error setting digest 00:20:15.795 401233CD027F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:15.795 401233CD027F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:15.795 
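
Stripped of the xtrace noise, the FIPS gate that fips.sh has just walked through reduces to the checks sketched below. This is a hedged sketch rather than the script itself: the 3.0.0 floor, the fips.so module path, the spdk_fips.conf name and the deliberate MD5 failure are taken from the trace above, while the sort -V comparison only approximates what scripts/common.sh's cmp_versions does field by field.

  # OpenSSL must be >= 3.0.0 (this host reports 3.1.1)
  ver=$(openssl version | awk '{print $2}')
  printf '3.0.0\n%s\n' "$ver" | sort -V -C || exit 1

  # The distro FIPS provider module must be installed
  [[ -f "$(openssl info -modulesdir)/fips.so" ]] || exit 1      # /usr/lib64/ossl-modules/fips.so here

  # Point OpenSSL at a generated config that activates the base + fips providers
  export OPENSSL_CONF=spdk_fips.conf
  openssl list -providers | grep name                           # expect a "base" and a "fips" entry

  # Negative test: with the fips provider active, MD5 must be rejected
  if echo -n test | openssl md5 >/dev/null 2>&1; then
      echo "MD5 unexpectedly available - FIPS mode is not in effect" >&2
      exit 1
  fi
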
19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:15.795 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:22.365 19:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:22.365 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:22.365 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:22.365 19:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:22.365 Found net devices under 0000:86:00.0: cvl_0_0 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:22.365 Found net devices under 0000:86:00.1: cvl_0_1 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:22.365 19:21:44 
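
The device scan in the trace above boils down to the loop sketched here. It is an approximation only: the real gather_supported_nvmf_pci_devs works from a cached PCI listing and knows the whole e810/x722/mlx ID table, whereas this sketch hard-codes the single 0x8086:0x159b (E810) ID and the sysfs layout that produced cvl_0_0 and cvl_0_1 in this run.

  net_devs=()
  for pci in /sys/bus/pci/devices/*; do
      # Keep only Intel E810 ports (vendor 0x8086, device 0x159b)
      [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
      echo "Found ${pci##*/} ($(cat "$pci/vendor") - $(cat "$pci/device"))"
      # Collect the kernel net interfaces bound to that port, but only if the link is up
      for net in "$pci"/net/*; do
          [[ -e $net && $(cat "$net/operstate") == up ]] || continue
          echo "Found net devices under ${pci##*/}: ${net##*/}"     # cvl_0_0 / cvl_0_1 here
          net_devs+=("${net##*/}")
      done
  done
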
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:22.365 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:22.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:22.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:20:22.366 00:20:22.366 --- 10.0.0.2 ping statistics --- 00:20:22.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.366 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:22.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:22.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:20:22.366 00:20:22.366 --- 10.0.0.1 ping statistics --- 00:20:22.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.366 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3773184 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3773184 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3773184 ']' 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.366 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:22.366 [2024-11-26 19:21:44.868371] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
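
For orientation, the topology nvmf_tcp_init has just assembled is a two-port loopback on the same E810 card: cvl_0_0 is moved into a private network namespace and addressed as the 10.0.0.2 target, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator. Condensed from the trace (the nvmf_tgt path is shortened to a repo-relative one; otherwise names and addresses are exactly those used above):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                               # reachability check, both ways
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # The target then runs entirely inside the namespace (core mask 0x2):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
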
00:20:22.366 [2024-11-26 19:21:44.868415] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.366 [2024-11-26 19:21:44.948963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.366 [2024-11-26 19:21:44.989845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.366 [2024-11-26 19:21:44.989881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.366 [2024-11-26 19:21:44.989891] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.366 [2024-11-26 19:21:44.989897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.366 [2024-11-26 19:21:44.989902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.366 [2024-11-26 19:21:44.990456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.625 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.625 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:22.625 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:22.625 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:22.625 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:22.625 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.625 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:22.625 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:22.884 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:22.884 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.480 00:20:22.884 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:22.884 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.480 00:20:22.884 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.480 00:20:22.884 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.480 00:20:22.884 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:22.884 [2024-11-26 19:21:45.920901] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.884 [2024-11-26 19:21:45.936912] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:22.884 [2024-11-26 19:21:45.937129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.884 malloc0 00:20:23.143 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.143 19:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3773436 00:20:23.143 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:23.144 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3773436 /var/tmp/bdevperf.sock 00:20:23.144 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3773436 ']' 00:20:23.144 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.144 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.144 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:23.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:23.144 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.144 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:23.144 [2024-11-26 19:21:46.066351] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:20:23.144 [2024-11-26 19:21:46.066398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3773436 ] 00:20:23.144 [2024-11-26 19:21:46.142232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.144 [2024-11-26 19:21:46.182057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.080 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.080 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:24.080 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.480 00:20:24.080 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:24.338 [2024-11-26 19:21:47.268050] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.339 TLSTESTn1 00:20:24.339 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:24.598 Running I/O for 10 seconds... 
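
The initiator side of the FIPS/TLS run now in flight is driven entirely over bdevperf's RPC socket. The commands below are lifted from the trace (workspace prefixes dropped; the interop PSK and /tmp/spdk-psk.480 are the ones generated above); only the backgrounding of bdevperf is a simplification of how the harness actually launches it.

  # PSK written to a 0600 key file by fips.sh
  key_path=$(mktemp -t spdk-psk.XXX)                               # /tmp/spdk-psk.480 in this run
  echo -n "NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:" > "$key_path"
  chmod 0600 "$key_path"

  # bdevperf as TLS initiator: core mask 0x4, QD 128, 4 KiB verify workload, 10 s, wait for RPC (-z)
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  # Register the PSK, attach the controller over NVMe/TCP with --psk, then start the workload
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
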
00:20:26.471 5294.00 IOPS, 20.68 MiB/s [2024-11-26T18:21:50.522Z] 5458.00 IOPS, 21.32 MiB/s [2024-11-26T18:21:51.899Z] 5474.67 IOPS, 21.39 MiB/s [2024-11-26T18:21:52.835Z] 5494.00 IOPS, 21.46 MiB/s [2024-11-26T18:21:53.772Z] 5415.40 IOPS, 21.15 MiB/s [2024-11-26T18:21:54.710Z] 5337.17 IOPS, 20.85 MiB/s [2024-11-26T18:21:55.646Z] 5264.43 IOPS, 20.56 MiB/s [2024-11-26T18:21:56.583Z] 5215.50 IOPS, 20.37 MiB/s [2024-11-26T18:21:57.521Z] 5187.78 IOPS, 20.26 MiB/s [2024-11-26T18:21:57.521Z] 5154.50 IOPS, 20.13 MiB/s 00:20:34.407 Latency(us) 00:20:34.407 [2024-11-26T18:21:57.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.407 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:34.407 Verification LBA range: start 0x0 length 0x2000 00:20:34.407 TLSTESTn1 : 10.02 5158.37 20.15 0.00 0.00 24777.71 6085.49 32455.92 00:20:34.407 [2024-11-26T18:21:57.521Z] =================================================================================================================== 00:20:34.407 [2024-11-26T18:21:57.521Z] Total : 5158.37 20.15 0.00 0.00 24777.71 6085.49 32455.92 00:20:34.407 { 00:20:34.407 "results": [ 00:20:34.407 { 00:20:34.407 "job": "TLSTESTn1", 00:20:34.407 "core_mask": "0x4", 00:20:34.407 "workload": "verify", 00:20:34.407 "status": "finished", 00:20:34.407 "verify_range": { 00:20:34.407 "start": 0, 00:20:34.407 "length": 8192 00:20:34.407 }, 00:20:34.407 "queue_depth": 128, 00:20:34.407 "io_size": 4096, 00:20:34.407 "runtime": 10.017311, 00:20:34.407 "iops": 5158.370345095605, 00:20:34.407 "mibps": 20.14988416052971, 00:20:34.407 "io_failed": 0, 00:20:34.407 "io_timeout": 0, 00:20:34.407 "avg_latency_us": 24777.707464946692, 00:20:34.407 "min_latency_us": 6085.4857142857145, 00:20:34.407 "max_latency_us": 32455.92380952381 00:20:34.407 } 00:20:34.407 ], 00:20:34.407 "core_count": 1 00:20:34.407 } 00:20:34.407 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:34.407 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:34.407 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:34.407 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:34.407 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:34.666 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:34.666 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:34.666 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:34.666 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:34.666 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:34.666 nvmf_trace.0 00:20:34.666 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:34.666 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3773436 00:20:34.666 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3773436 ']' 00:20:34.666 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 3773436 00:20:34.666 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:34.666 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.666 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3773436 00:20:34.666 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:34.666 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:34.666 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3773436' 00:20:34.666 killing process with pid 3773436 00:20:34.666 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3773436 00:20:34.666 Received shutdown signal, test time was about 10.000000 seconds 00:20:34.666 00:20:34.666 Latency(us) 00:20:34.666 [2024-11-26T18:21:57.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.666 [2024-11-26T18:21:57.781Z] =================================================================================================================== 00:20:34.667 [2024-11-26T18:21:57.781Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:34.667 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3773436 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:34.926 rmmod nvme_tcp 00:20:34.926 rmmod nvme_fabrics 00:20:34.926 rmmod nvme_keyring 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3773184 ']' 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3773184 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3773184 ']' 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3773184 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3773184 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:34.926 19:21:57 
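
The cleanup now running (the nvmf_tls suite ran the same one earlier) is, in outline, the sketch below. The pid variables stand for the bdevperf and nvmf_tgt processes being killed, the tar destination stands for the job's output directory, and remove_spdk_ns is assumed to simply delete the namespace created earlier.

  kill "$bdevperf_pid" "$nvmf_tgt_pid"                             # 3773436 and 3773184 in this run
  tar -C /dev/shm -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0   # preserve the trace file
  modprobe -r nvme-tcp nvme-fabrics nvme-keyring                   # unload host-side NVMe modules
  iptables-save | grep -v SPDK_NVMF | iptables-restore             # drop only the SPDK-tagged rules
  ip netns delete cvl_0_0_ns_spdk                                  # remove_spdk_ns (assumed behavior)
  ip -4 addr flush cvl_0_1
  rm -f /tmp/spdk-psk.480                                          # plus the nvmf_tls key files earlier
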
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3773184' 00:20:34.926 killing process with pid 3773184 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3773184 00:20:34.926 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3773184 00:20:35.185 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:35.185 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:35.185 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:35.185 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:35.185 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:35.185 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:35.185 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:35.185 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:35.185 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:35.185 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.185 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.185 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.090 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:37.090 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.480 00:20:37.090 00:20:37.090 real 0m21.729s 00:20:37.090 user 0m22.937s 00:20:37.090 sys 0m10.216s 00:20:37.090 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:37.090 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:37.090 ************************************ 00:20:37.090 END TEST nvmf_fips 00:20:37.090 ************************************ 00:20:37.090 19:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:37.090 19:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:37.090 19:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:37.090 19:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:37.349 ************************************ 00:20:37.349 START TEST nvmf_control_msg_list 00:20:37.349 ************************************ 00:20:37.349 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:37.349 * Looking for test storage... 
00:20:37.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:37.349 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:37.349 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:20:37.349 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:37.349 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:37.349 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:37.349 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:37.349 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:37.349 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:37.349 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:37.349 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:37.349 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:37.349 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:37.349 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:37.349 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:37.349 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:37.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.350 --rc genhtml_branch_coverage=1 00:20:37.350 --rc genhtml_function_coverage=1 00:20:37.350 --rc genhtml_legend=1 00:20:37.350 --rc geninfo_all_blocks=1 00:20:37.350 --rc geninfo_unexecuted_blocks=1 00:20:37.350 00:20:37.350 ' 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:37.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.350 --rc genhtml_branch_coverage=1 00:20:37.350 --rc genhtml_function_coverage=1 00:20:37.350 --rc genhtml_legend=1 00:20:37.350 --rc geninfo_all_blocks=1 00:20:37.350 --rc geninfo_unexecuted_blocks=1 00:20:37.350 00:20:37.350 ' 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:37.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.350 --rc genhtml_branch_coverage=1 00:20:37.350 --rc genhtml_function_coverage=1 00:20:37.350 --rc genhtml_legend=1 00:20:37.350 --rc geninfo_all_blocks=1 00:20:37.350 --rc geninfo_unexecuted_blocks=1 00:20:37.350 00:20:37.350 ' 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:37.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.350 --rc genhtml_branch_coverage=1 00:20:37.350 --rc genhtml_function_coverage=1 00:20:37.350 --rc genhtml_legend=1 00:20:37.350 --rc geninfo_all_blocks=1 00:20:37.350 --rc geninfo_unexecuted_blocks=1 00:20:37.350 00:20:37.350 ' 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:37.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:37.350 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:43.926 19:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:43.926 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.926 19:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:43.926 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.926 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:43.927 Found net devices under 0000:86:00.0: cvl_0_0 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:43.927 Found net devices under 0000:86:00.1: cvl_0_1 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:43.927 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:43.927 19:22:06 
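The nvmf_tcp_init steps above move one of the two E810 ports into a dedicated network namespace for the target and leave the other in the host namespace for the initiator. A minimal standalone sketch of the same layout, reusing the interface names and addresses from this run (cvl_0_0, cvl_0_1 and the 10.0.0.x addresses are specific to this machine, so treat them as placeholders):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator port stays in the host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # address the perf tools will connect to
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

The iptables ACCEPT rule for port 4420 and the two pings that follow in the log only confirm that this link works in both directions before the target application is started.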
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:43.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:20:43.927 00:20:43.927 --- 10.0.0.2 ping statistics --- 00:20:43.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.927 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:20:43.927 00:20:43.927 --- 10.0.0.1 ping statistics --- 00:20:43.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.927 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3779460 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3779460 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3779460 ']' 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.927 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:43.927 [2024-11-26 19:22:06.350444] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:20:43.927 [2024-11-26 19:22:06.350487] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.927 [2024-11-26 19:22:06.427858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.927 [2024-11-26 19:22:06.466639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.927 [2024-11-26 19:22:06.466678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.927 [2024-11-26 19:22:06.466684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.927 [2024-11-26 19:22:06.466690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.927 [2024-11-26 19:22:06.466695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
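The trace notices above come from starting nvmf_tgt with -i 0 -e 0xFFFF inside the target namespace, so every tracepoint group is enabled for shared-memory instance 0. A hedged sketch of the two inspection options the notices themselves suggest (the spdk_trace binary is assumed to live under build/bin of this checkout; the path may differ):

  # live snapshot of the nvmf trace group for shm instance 0, as the notice recommends
  ./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
  # or keep the raw shared-memory file for offline analysis, which is what the earlier
  # cleanup step did when it tarred nvmf_trace.0 into the output directory
  cp /dev/shm/nvmf_trace.0 /tmp/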
00:20:43.927 [2024-11-26 19:22:06.467278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:44.186 [2024-11-26 19:22:07.222267] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:44.186 Malloc0 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.186 19:22:07 
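The rpc_cmd calls above (together with the nvmf_subsystem_add_listener call just below) build the whole target for this test: a TCP transport deliberately created with --control-msg-num 1, a subsystem, a small malloc bdev and its namespace. rpc_cmd is the test helper around scripts/rpc.py, so a rough by-hand equivalent, with every flag value copied from this run, would be:

  scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  scripts/rpc.py bdev_malloc_create -b Malloc0 32 512      # same name/size/block-size arguments as above
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420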
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:44.186 [2024-11-26 19:22:07.266552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3779782 00:20:44.186 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:44.187 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3779783 00:20:44.187 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:44.187 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3779784 00:20:44.187 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3779782 00:20:44.187 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:44.444 [2024-11-26 19:22:07.344962] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:44.444 [2024-11-26 19:22:07.355295] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:44.445 [2024-11-26 19:22:07.355447] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:45.378 Initializing NVMe Controllers 00:20:45.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:45.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:45.378 Initialization complete. Launching workers. 
00:20:45.378 ======================================================== 00:20:45.378 Latency(us) 00:20:45.378 Device Information : IOPS MiB/s Average min max 00:20:45.379 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6186.98 24.17 161.29 131.53 360.65 00:20:45.379 ======================================================== 00:20:45.379 Total : 6186.98 24.17 161.29 131.53 360.65 00:20:45.379 00:20:45.379 Initializing NVMe Controllers 00:20:45.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:45.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:45.379 Initialization complete. Launching workers. 00:20:45.379 ======================================================== 00:20:45.379 Latency(us) 00:20:45.379 Device Information : IOPS MiB/s Average min max 00:20:45.379 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40969.86 40786.98 41972.34 00:20:45.379 ======================================================== 00:20:45.379 Total : 25.00 0.10 40969.86 40786.98 41972.34 00:20:45.379 00:20:45.636 Initializing NVMe Controllers 00:20:45.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:45.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:45.636 Initialization complete. Launching workers. 00:20:45.636 ======================================================== 00:20:45.636 Latency(us) 00:20:45.636 Device Information : IOPS MiB/s Average min max 00:20:45.636 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6415.00 25.06 155.53 122.35 351.76 00:20:45.636 ======================================================== 00:20:45.636 Total : 6415.00 25.06 155.53 122.35 351.76 00:20:45.636 00:20:45.636 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3779783 00:20:45.636 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3779784 00:20:45.636 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:45.636 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:45.636 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:45.636 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:45.636 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:45.636 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:45.636 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:45.636 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:45.636 rmmod nvme_tcp 00:20:45.636 rmmod nvme_fabrics 00:20:45.636 rmmod nvme_keyring 00:20:45.636 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:45.636 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:45.636 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:45.636 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # 
'[' -n 3779460 ']' 00:20:45.636 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3779460 00:20:45.636 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3779460 ']' 00:20:45.637 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3779460 00:20:45.637 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:45.637 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.637 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3779460 00:20:45.637 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:45.637 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:45.637 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3779460' 00:20:45.637 killing process with pid 3779460 00:20:45.637 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3779460 00:20:45.637 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3779460 00:20:45.895 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:45.895 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:45.895 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:45.895 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:45.895 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:45.895 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:45.895 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:45.895 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:45.895 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:45.895 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.896 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.896 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.798 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:47.798 00:20:47.798 real 0m10.676s 00:20:47.798 user 0m7.256s 00:20:47.798 sys 0m5.506s 00:20:47.798 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.798 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:47.798 ************************************ 00:20:47.798 END TEST nvmf_control_msg_list 00:20:47.798 ************************************ 
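Each of the three workers in the test that just finished is a plain spdk_nvme_perf invocation against the listener configured earlier, differing only in core mask (0x2, 0x4, 0x8). To repeat one of them by hand against the same target, something like the following, with the binary path, queue settings and connection string copied from this run:

  ./build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

In this log the three instances finish with very different results, roughly 6.4 K and 6.2 K IOPS on lcores 1 and 3 versus 25 IOPS at about 41 ms average latency on lcore 2, which is the kind of spread this test collects under the deliberately small --control-msg-num 1 transport setting.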
00:20:48.057 19:22:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:48.057 19:22:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:48.057 19:22:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.057 19:22:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:48.057 ************************************ 00:20:48.057 START TEST nvmf_wait_for_buf 00:20:48.057 ************************************ 00:20:48.057 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:48.057 * Looking for test storage... 00:20:48.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:48.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.057 --rc genhtml_branch_coverage=1 00:20:48.057 --rc genhtml_function_coverage=1 00:20:48.057 --rc genhtml_legend=1 00:20:48.057 --rc geninfo_all_blocks=1 00:20:48.057 --rc geninfo_unexecuted_blocks=1 00:20:48.057 00:20:48.057 ' 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:48.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.057 --rc genhtml_branch_coverage=1 00:20:48.057 --rc genhtml_function_coverage=1 00:20:48.057 --rc genhtml_legend=1 00:20:48.057 --rc geninfo_all_blocks=1 00:20:48.057 --rc geninfo_unexecuted_blocks=1 00:20:48.057 00:20:48.057 ' 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:48.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.057 --rc genhtml_branch_coverage=1 00:20:48.057 --rc genhtml_function_coverage=1 00:20:48.057 --rc genhtml_legend=1 00:20:48.057 --rc geninfo_all_blocks=1 00:20:48.057 --rc geninfo_unexecuted_blocks=1 00:20:48.057 00:20:48.057 ' 00:20:48.057 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:48.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.057 --rc genhtml_branch_coverage=1 00:20:48.057 --rc genhtml_function_coverage=1 00:20:48.058 --rc genhtml_legend=1 00:20:48.058 --rc geninfo_all_blocks=1 00:20:48.058 --rc geninfo_unexecuted_blocks=1 00:20:48.058 00:20:48.058 ' 00:20:48.058 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:48.058 19:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:48.058 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.058 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.058 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.058 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.058 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.058 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.058 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.058 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.058 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.058 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.058 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:48.058 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:48.058 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.317 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.317 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:48.317 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.317 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:48.317 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:48.317 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.317 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.317 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.317 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.317 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:48.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:48.318 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:54.887 
19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:54.887 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:54.887 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:54.888 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:54.888 Found net devices under 0000:86:00.0: cvl_0_0 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:54.888 Found net devices under 0000:86:00.1: cvl_0_1 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:54.888 19:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:54.888 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:54.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:54.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:20:54.888 00:20:54.888 --- 10.0.0.2 ping statistics --- 00:20:54.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.888 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:54.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:54.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:20:54.888 00:20:54.888 --- 10.0.0.1 ping statistics --- 00:20:54.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.888 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3783491 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3783491 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3783491 ']' 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.888 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.888 [2024-11-26 19:22:17.179054] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:20:54.888 [2024-11-26 19:22:17.179110] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.889 [2024-11-26 19:22:17.240873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.889 [2024-11-26 19:22:17.283216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.889 [2024-11-26 19:22:17.283251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.889 [2024-11-26 19:22:17.283258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.889 [2024-11-26 19:22:17.283264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.889 [2024-11-26 19:22:17.283269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:54.889 [2024-11-26 19:22:17.283831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.889 19:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.889 Malloc0 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.889 [2024-11-26 19:22:17.470777] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.889 [2024-11-26 19:22:17.498992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.889 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:54.889 [2024-11-26 19:22:17.582179] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:56.265 Initializing NVMe Controllers 00:20:56.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:56.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:56.265 Initialization complete. Launching workers. 00:20:56.265 ======================================================== 00:20:56.265 Latency(us) 00:20:56.266 Device Information : IOPS MiB/s Average min max 00:20:56.266 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33538.17 29346.79 71044.75 00:20:56.266 ======================================================== 00:20:56.266 Total : 124.00 15.50 33538.17 29346.79 71044.75 00:20:56.266 00:20:56.266 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:56.266 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:56.266 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.266 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:56.266 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:56.266 rmmod nvme_tcp 00:20:56.266 rmmod nvme_fabrics 00:20:56.266 rmmod nvme_keyring 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3783491 ']' 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3783491 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3783491 ']' 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3783491 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3783491 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3783491' 00:20:56.266 killing process with pid 3783491 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3783491 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3783491 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.266 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.799 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:58.799 00:20:58.799 real 0m10.404s 00:20:58.799 user 0m3.917s 00:20:58.799 sys 0m4.928s 00:20:58.799 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:58.799 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:58.799 ************************************ 00:20:58.799 END TEST nvmf_wait_for_buf 00:20:58.799 ************************************ 00:20:58.799 19:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:58.799 19:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:58.799 19:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:58.799 19:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:58.799 19:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:58.799 19:22:21 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:04.071 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:04.071 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:04.071 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:04.071 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:04.072 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:04.072 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:04.072 Found net devices under 0000:86:00.0: cvl_0_0 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:04.072 Found net devices under 0000:86:00.1: cvl_0_1 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:04.072 ************************************ 00:21:04.072 START TEST nvmf_perf_adq 00:21:04.072 ************************************ 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:04.072 * Looking for test storage... 00:21:04.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:21:04.072 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:04.331 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:04.331 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:04.331 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:04.331 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:04.332 19:22:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:04.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.332 --rc genhtml_branch_coverage=1 00:21:04.332 --rc genhtml_function_coverage=1 00:21:04.332 --rc genhtml_legend=1 00:21:04.332 --rc geninfo_all_blocks=1 00:21:04.332 --rc geninfo_unexecuted_blocks=1 00:21:04.332 00:21:04.332 ' 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:04.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.332 --rc genhtml_branch_coverage=1 00:21:04.332 --rc genhtml_function_coverage=1 00:21:04.332 --rc genhtml_legend=1 00:21:04.332 --rc geninfo_all_blocks=1 00:21:04.332 --rc geninfo_unexecuted_blocks=1 00:21:04.332 00:21:04.332 ' 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:04.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.332 --rc genhtml_branch_coverage=1 00:21:04.332 --rc genhtml_function_coverage=1 00:21:04.332 --rc genhtml_legend=1 00:21:04.332 --rc geninfo_all_blocks=1 00:21:04.332 --rc geninfo_unexecuted_blocks=1 00:21:04.332 00:21:04.332 ' 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:04.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.332 --rc genhtml_branch_coverage=1 00:21:04.332 --rc genhtml_function_coverage=1 00:21:04.332 --rc genhtml_legend=1 00:21:04.332 --rc geninfo_all_blocks=1 00:21:04.332 --rc geninfo_unexecuted_blocks=1 00:21:04.332 00:21:04.332 ' 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.332 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:04.333 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.333 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:04.333 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:04.333 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:04.333 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.333 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.333 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.333 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:04.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:04.333 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:04.333 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:04.333 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:04.333 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:04.333 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:04.333 19:22:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.911 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:10.912 19:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:10.912 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:10.912 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:10.912 Found net devices under 0000:86:00.0: cvl_0_0 00:21:10.912 19:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:10.912 Found net devices under 0000:86:00.1: cvl_0_1 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:10.912 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:11.171 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:13.075 19:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:18.349 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.349 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:18.350 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:18.350 Found net devices under 0000:86:00.0: cvl_0_0 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:18.350 Found net devices under 0000:86:00.1: cvl_0_1 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:18.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:21:18.350 00:21:18.350 --- 10.0.0.2 ping statistics --- 00:21:18.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.350 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:18.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:21:18.350 00:21:18.350 --- 10.0.0.1 ping statistics --- 00:21:18.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.350 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3792070 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3792070 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3792070 ']' 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.350 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.609 [2024-11-26 19:22:41.497418] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
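Condensed from the nvmf_tcp_init trace above: the first E810 port (cvl_0_0) is moved into a private namespace and addressed as 10.0.0.2 for the target, the second port (cvl_0_1) stays in the root namespace as 10.0.0.1 for the initiator, TCP port 4420 is opened, and connectivity is verified both ways. A sketch of the same sequence, with the interface and namespace names taken from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator side -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator side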
00:21:18.610 [2024-11-26 19:22:41.497461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.610 [2024-11-26 19:22:41.560165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:18.610 [2024-11-26 19:22:41.604532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.610 [2024-11-26 19:22:41.604571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.610 [2024-11-26 19:22:41.604578] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.610 [2024-11-26 19:22:41.604583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.610 [2024-11-26 19:22:41.604589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:18.610 [2024-11-26 19:22:41.606031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.610 [2024-11-26 19:22:41.606137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.610 [2024-11-26 19:22:41.606247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.610 [2024-11-26 19:22:41.606247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:18.610 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.610 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:18.610 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:18.610 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:18.610 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.610 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.610 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:18.610 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:18.610 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:18.610 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.610 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.610 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.869 
19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.869 [2024-11-26 19:22:41.840741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.869 Malloc1 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.869 [2024-11-26 19:22:41.900508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3792199 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:18.869 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:21.396 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:21.396 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.396 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:21.396 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.396 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:21.396 "tick_rate": 2100000000, 00:21:21.396 "poll_groups": [ 00:21:21.396 { 00:21:21.396 "name": "nvmf_tgt_poll_group_000", 00:21:21.396 "admin_qpairs": 1, 00:21:21.396 "io_qpairs": 1, 00:21:21.396 "current_admin_qpairs": 1, 00:21:21.396 "current_io_qpairs": 1, 00:21:21.396 "pending_bdev_io": 0, 00:21:21.396 "completed_nvme_io": 20106, 00:21:21.396 "transports": [ 00:21:21.396 { 00:21:21.396 "trtype": "TCP" 00:21:21.396 } 00:21:21.396 ] 00:21:21.396 }, 00:21:21.396 { 00:21:21.396 "name": "nvmf_tgt_poll_group_001", 00:21:21.396 "admin_qpairs": 0, 00:21:21.396 "io_qpairs": 1, 00:21:21.396 "current_admin_qpairs": 0, 00:21:21.396 "current_io_qpairs": 1, 00:21:21.396 "pending_bdev_io": 0, 00:21:21.396 "completed_nvme_io": 20556, 00:21:21.396 "transports": [ 00:21:21.396 { 00:21:21.396 "trtype": "TCP" 00:21:21.396 } 00:21:21.396 ] 00:21:21.396 }, 00:21:21.396 { 00:21:21.396 "name": "nvmf_tgt_poll_group_002", 00:21:21.396 "admin_qpairs": 0, 00:21:21.396 "io_qpairs": 1, 00:21:21.396 "current_admin_qpairs": 0, 00:21:21.396 "current_io_qpairs": 1, 00:21:21.396 "pending_bdev_io": 0, 00:21:21.396 "completed_nvme_io": 20253, 00:21:21.396 "transports": [ 00:21:21.396 { 00:21:21.396 "trtype": "TCP" 00:21:21.396 } 00:21:21.396 ] 00:21:21.396 }, 00:21:21.396 { 00:21:21.396 "name": "nvmf_tgt_poll_group_003", 00:21:21.396 "admin_qpairs": 0, 00:21:21.396 "io_qpairs": 1, 00:21:21.396 "current_admin_qpairs": 0, 00:21:21.396 "current_io_qpairs": 1, 00:21:21.396 "pending_bdev_io": 0, 00:21:21.396 "completed_nvme_io": 19978, 00:21:21.396 "transports": [ 00:21:21.396 { 00:21:21.396 "trtype": "TCP" 00:21:21.396 } 00:21:21.396 ] 00:21:21.396 } 00:21:21.396 ] 00:21:21.396 }' 00:21:21.396 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:21.396 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:21.396 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:21.396 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:21.396 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3792199 00:21:29.504 Initializing NVMe Controllers 00:21:29.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:29.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:29.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:29.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:29.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:21:29.504 Initialization complete. Launching workers. 00:21:29.504 ======================================================== 00:21:29.505 Latency(us) 00:21:29.505 Device Information : IOPS MiB/s Average min max 00:21:29.505 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10643.50 41.58 6014.33 2276.43 9943.00 00:21:29.505 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10973.30 42.86 5832.54 2001.01 12188.75 00:21:29.505 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10785.10 42.13 5935.23 1824.74 12411.93 00:21:29.505 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10756.40 42.02 5950.75 2255.13 11325.38 00:21:29.505 ======================================================== 00:21:29.505 Total : 43158.29 168.59 5932.50 1824.74 12411.93 00:21:29.505 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:29.505 rmmod nvme_tcp 00:21:29.505 rmmod nvme_fabrics 00:21:29.505 rmmod nvme_keyring 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3792070 ']' 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3792070 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3792070 ']' 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3792070 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3792070 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3792070' 00:21:29.505 killing process with pid 3792070 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3792070 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3792070 00:21:29.505 19:22:52 
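The pass/fail signal for this first ADQ run is the nvmf_get_stats check above: the perf initiator ran on four cores (-c 0xF0) and the test expects each of the target's four poll groups to own exactly one I/O qpair, i.e. ADQ spread the connections evenly instead of piling them onto one group. Roughly as the script performs it (rpc_cmd is the harness wrapper around the SPDK RPC interface):

  count=$(rpc_cmd nvmf_get_stats \
          | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
          | wc -l)
  [[ $count -ne 4 ]] && echo 'ADQ connection placement failed'   # this run counted 4, so the check passes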
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:29.505 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.412 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:31.412 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:31.412 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:31.412 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:32.791 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:34.697 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:39.967 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:39.967 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:39.967 Found net devices under 0000:86:00.0: cvl_0_0 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:39.967 Found net devices under 0000:86:00.1: cvl_0_1 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:39.967 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:39.968 19:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:39.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:39.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:21:39.968 00:21:39.968 --- 10.0.0.2 ping statistics --- 00:21:39.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.968 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:39.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:39.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:21:39.968 00:21:39.968 --- 10.0.0.1 ping statistics --- 00:21:39.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.968 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:39.968 net.core.busy_poll = 1 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:39.968 net.core.busy_read = 1 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:39.968 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:39.968 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:39.968 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:39.968 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:40.226 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:40.227 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:40.227 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:40.227 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.227 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3795969 00:21:40.227 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3795969 00:21:40.227 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:40.227 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3795969 ']' 00:21:40.227 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.227 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.227 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.227 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.227 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.227 [2024-11-26 19:23:03.146339] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:21:40.227 [2024-11-26 19:23:03.146387] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.227 [2024-11-26 19:23:03.227088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:40.227 [2024-11-26 19:23:03.267749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
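For the busy-poll variant, adq_configure_driver (traced above) prepares the target port for hardware traffic steering before the second nvmf target is started. Gathered in one place, the commands from this run are the following (paths shortened; everything except the sysctls runs inside the cvl_0_0_ns_spdk namespace):

  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # TC0 = queues 0-1, TC1 = queues 2-3; the flower rule steers NVMe/TCP traffic for 10.0.0.2:4420
  # into TC1, and skip_sw keeps the match in NIC hardware only
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  ip netns exec cvl_0_0_ns_spdk scripts/perf/nvmf/set_xps_rxqs cvl_0_0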
00:21:40.227 [2024-11-26 19:23:03.267787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.227 [2024-11-26 19:23:03.267794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.227 [2024-11-26 19:23:03.267799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.227 [2024-11-26 19:23:03.267804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.227 [2024-11-26 19:23:03.269388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.227 [2024-11-26 19:23:03.269500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.227 [2024-11-26 19:23:03.269606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.227 [2024-11-26 19:23:03.269607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.169 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.169 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:41.169 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:41.169 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:41.169 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.169 19:23:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.169 [2024-11-26 19:23:04.141121] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.169 Malloc1 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.169 [2024-11-26 19:23:04.206815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3796079 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:41.169 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:43.200 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:43.200 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.200 19:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.200 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.200 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:43.200 "tick_rate": 2100000000, 00:21:43.200 "poll_groups": [ 00:21:43.200 { 00:21:43.200 "name": "nvmf_tgt_poll_group_000", 00:21:43.200 "admin_qpairs": 1, 00:21:43.200 "io_qpairs": 0, 00:21:43.200 "current_admin_qpairs": 1, 00:21:43.200 "current_io_qpairs": 0, 00:21:43.200 "pending_bdev_io": 0, 00:21:43.200 "completed_nvme_io": 0, 00:21:43.200 "transports": [ 00:21:43.200 { 00:21:43.200 "trtype": "TCP" 00:21:43.200 } 00:21:43.200 ] 00:21:43.200 }, 00:21:43.200 { 00:21:43.200 "name": "nvmf_tgt_poll_group_001", 00:21:43.200 "admin_qpairs": 0, 00:21:43.200 "io_qpairs": 4, 00:21:43.200 "current_admin_qpairs": 0, 00:21:43.200 "current_io_qpairs": 4, 00:21:43.200 "pending_bdev_io": 0, 00:21:43.200 "completed_nvme_io": 43927, 00:21:43.200 "transports": [ 00:21:43.200 { 00:21:43.200 "trtype": "TCP" 00:21:43.200 } 00:21:43.200 ] 00:21:43.200 }, 00:21:43.200 { 00:21:43.200 "name": "nvmf_tgt_poll_group_002", 00:21:43.200 "admin_qpairs": 0, 00:21:43.200 "io_qpairs": 0, 00:21:43.200 "current_admin_qpairs": 0, 00:21:43.200 "current_io_qpairs": 0, 00:21:43.200 "pending_bdev_io": 0, 00:21:43.200 "completed_nvme_io": 0, 00:21:43.200 "transports": [ 00:21:43.200 { 00:21:43.200 "trtype": "TCP" 00:21:43.200 } 00:21:43.200 ] 00:21:43.200 }, 00:21:43.200 { 00:21:43.200 "name": "nvmf_tgt_poll_group_003", 00:21:43.200 "admin_qpairs": 0, 00:21:43.200 "io_qpairs": 0, 00:21:43.200 "current_admin_qpairs": 0, 00:21:43.200 "current_io_qpairs": 0, 00:21:43.200 "pending_bdev_io": 0, 00:21:43.200 "completed_nvme_io": 0, 00:21:43.200 "transports": [ 00:21:43.200 { 00:21:43.200 "trtype": "TCP" 00:21:43.200 } 00:21:43.200 ] 00:21:43.200 } 00:21:43.200 ] 00:21:43.200 }' 00:21:43.200 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:43.200 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:43.200 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:21:43.200 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:21:43.200 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3796079 00:21:51.389 Initializing NVMe Controllers 00:21:51.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:51.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:51.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:51.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:51.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:51.389 Initialization complete. Launching workers. 
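Note: on the target side, adq_configure_nvmf_target enables placement-id based socket grouping and a raised socket priority before the TCP transport is created, and once the perf load is running the test parses nvmf_get_stats to confirm that ADQ confined the I/O queue pairs to a single poll group (above, only nvmf_tgt_poll_group_001 carries the four I/O qpairs). The sketch below condenses that sequence; rpc.py is used here in place of the rpc_cmd wrapper from the trace, and its path is a placeholder.

RPC="/path/to/spdk/scripts/rpc.py"     # stands in for rpc_cmd; adjust to the local checkout

# Enable ADQ-relevant socket options before framework init, then bring up the TCP transport.
impl=$($RPC sock_get_default_impl | jq -r .impl_name)                  # "posix" in this run
$RPC sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i "$impl"
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1

# While spdk_nvme_perf is running, count the poll groups that carry no active I/O qpairs.
idle=$($RPC nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
# Four poll groups exist; with ADQ steering working, three stay idle. The check above fails below 2.
[[ $idle -lt 2 ]] && echo "ADQ steering check failed"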
00:21:51.389 ======================================================== 00:21:51.389 Latency(us) 00:21:51.389 Device Information : IOPS MiB/s Average min max 00:21:51.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6002.80 23.45 10666.89 1228.07 58579.91 00:21:51.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5882.90 22.98 10897.98 1052.33 57172.34 00:21:51.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5908.20 23.08 10837.14 1191.83 56467.36 00:21:51.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5554.50 21.70 11560.96 1458.93 55686.14 00:21:51.389 ======================================================== 00:21:51.389 Total : 23348.40 91.20 10980.89 1052.33 58579.91 00:21:51.389 00:21:51.389 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:51.389 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:51.389 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:51.389 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:51.389 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:51.389 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:51.389 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:51.389 rmmod nvme_tcp 00:21:51.389 rmmod nvme_fabrics 00:21:51.389 rmmod nvme_keyring 00:21:51.389 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:51.389 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:51.389 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:51.389 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3795969 ']' 00:21:51.389 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3795969 00:21:51.389 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3795969 ']' 00:21:51.389 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3795969 00:21:51.389 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:51.389 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:51.389 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3795969 00:21:51.648 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:51.648 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:51.648 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3795969' 00:21:51.648 killing process with pid 3795969 00:21:51.648 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3795969 00:21:51.648 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3795969 00:21:51.648 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:51.648 
19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:51.648 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:51.648 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:51.648 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:51.648 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:51.648 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:51.648 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:51.648 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:51.648 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.648 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.648 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.183 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:54.183 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:54.183 00:21:54.183 real 0m49.733s 00:21:54.183 user 2m46.524s 00:21:54.183 sys 0m10.775s 00:21:54.183 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.183 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.183 ************************************ 00:21:54.183 END TEST nvmf_perf_adq 00:21:54.183 ************************************ 00:21:54.183 19:23:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:54.183 19:23:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:54.183 19:23:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:54.183 19:23:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:54.183 ************************************ 00:21:54.183 START TEST nvmf_shutdown 00:21:54.183 ************************************ 00:21:54.183 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:54.183 * Looking for test storage... 
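Note: before the shutdown suite that starts above, nvmftestfini tore the perf_adq fixture down in a fixed order: unload the initiator-side NVMe modules, kill the nvmf_tgt reactor if it is still alive, strip only the iptables rules tagged with the SPDK_NVMF comment, drop the test namespace, and flush the initiator address. The sketch below condenses that cleanup; the pid and interface names are the ones from this run, and deleting the namespace directly is only an approximation of the _remove_spdk_ns helper.

# Teardown, condensed from nvmftestfini/nvmfcleanup in the trace above.
sync
modprobe -v -r nvme-tcp              # also drops nvme_fabrics/nvme_keyring as dependents
modprobe -v -r nvme-fabrics

# killprocess: only signal the target if it is still running and really is an SPDK reactor, not sudo.
pid=3795969
if kill -0 "$pid" 2>/dev/null; then
    name=$(ps --no-headers -o comm= "$pid")
    # wait succeeds in the test because nvmf_tgt was started by the same shell.
    [[ $name != sudo ]] && kill "$pid" && wait "$pid"
fi

# Remove only the SPDK-tagged iptables rules, leaving everything else in place.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Drop the target namespace (approximation of _remove_spdk_ns) and flush the initiator address.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1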
00:21:54.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:54.183 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:54.183 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:21:54.183 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:54.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.183 --rc genhtml_branch_coverage=1 00:21:54.183 --rc genhtml_function_coverage=1 00:21:54.183 --rc genhtml_legend=1 00:21:54.183 --rc geninfo_all_blocks=1 00:21:54.183 --rc geninfo_unexecuted_blocks=1 00:21:54.183 00:21:54.183 ' 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:54.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.183 --rc genhtml_branch_coverage=1 00:21:54.183 --rc genhtml_function_coverage=1 00:21:54.183 --rc genhtml_legend=1 00:21:54.183 --rc geninfo_all_blocks=1 00:21:54.183 --rc geninfo_unexecuted_blocks=1 00:21:54.183 00:21:54.183 ' 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:54.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.183 --rc genhtml_branch_coverage=1 00:21:54.183 --rc genhtml_function_coverage=1 00:21:54.183 --rc genhtml_legend=1 00:21:54.183 --rc geninfo_all_blocks=1 00:21:54.183 --rc geninfo_unexecuted_blocks=1 00:21:54.183 00:21:54.183 ' 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:54.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.183 --rc genhtml_branch_coverage=1 00:21:54.183 --rc genhtml_function_coverage=1 00:21:54.183 --rc genhtml_legend=1 00:21:54.183 --rc geninfo_all_blocks=1 00:21:54.183 --rc geninfo_unexecuted_blocks=1 00:21:54.183 00:21:54.183 ' 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
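Note: the lcov check above goes through the cmp_versions/lt helpers in scripts/common.sh, which split version strings on '.', '-' and ':' and compare them component by component. The stand-alone sketch below condenses that logic; the real helper also validates each component through its decimal function, which is omitted here.

# Simplified version comparison in the spirit of cmp_versions/lt above.
cmp_lt() {                      # succeeds (returns 0) when version $1 < version $2
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}            # missing components count as 0
        ((a > b)) && return 1
        ((a < b)) && return 0
    done
    return 1                                       # equal versions are not "less than"
}

cmp_lt 1.15 2 && echo "lcov 1.15 predates 2"       # mirrors the 'lt 1.15 2' call in the trace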
00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.183 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:54.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:54.184 19:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:54.184 ************************************ 00:21:54.184 START TEST nvmf_shutdown_tc1 00:21:54.184 ************************************ 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:54.184 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:00.752 19:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:00.752 19:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:00.752 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:00.752 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:00.752 Found net devices under 0000:86:00.0: cvl_0_0 00:22:00.752 19:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:00.752 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:00.753 Found net devices under 0000:86:00.1: cvl_0_1 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:00.753 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:00.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:22:00.753 00:22:00.753 --- 10.0.0.2 ping statistics --- 00:22:00.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.753 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:00.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:00.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:22:00.753 00:22:00.753 --- 10.0.0.1 ping statistics --- 00:22:00.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.753 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3801455 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3801455 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3801455 ']' 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
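Note: nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the JSON-RPC socket answers. The sketch below condenses that start-and-wait pattern; the 0x1E core mask and socket path are the ones from this run, the nvmf_tgt and rpc.py paths are placeholders, and using rpc_get_methods as the liveness probe is an assumption about waitforlisten that this excerpt does not show.

# Start the NVMe-oF target inside the test namespace and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk \
    /path/to/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

rpc_sock=/var/tmp/spdk.sock
echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
for ((i = 0; i < 100; i++)); do                    # max_retries=100, as in the trace
    # The RPC call succeeds only once the app has created and bound the socket.
    /path/to/spdk/scripts/rpc.py -t 1 -s "$rpc_sock" rpc_get_methods &>/dev/null && break
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening"; break; }
    sleep 0.5
done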
00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.753 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.753 [2024-11-26 19:23:23.174365] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:22:00.753 [2024-11-26 19:23:23.174409] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.753 [2024-11-26 19:23:23.254195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:00.753 [2024-11-26 19:23:23.296206] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.753 [2024-11-26 19:23:23.296245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.753 [2024-11-26 19:23:23.296251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.753 [2024-11-26 19:23:23.296257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.753 [2024-11-26 19:23:23.296262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.753 [2024-11-26 19:23:23.297852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.753 [2024-11-26 19:23:23.297957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.753 [2024-11-26 19:23:23.298041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.754 [2024-11-26 19:23:23.298041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:01.012 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.012 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:01.012 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:01.012 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:01.012 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:01.012 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.012 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:01.012 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.012 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:01.012 [2024-11-26 19:23:24.053981] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.012 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.012 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:01.012 19:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:01.012 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:01.012 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.013 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:01.271 Malloc1 
00:22:01.271 [2024-11-26 19:23:24.172995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.271 Malloc2 00:22:01.271 Malloc3 00:22:01.271 Malloc4 00:22:01.271 Malloc5 00:22:01.271 Malloc6 00:22:01.531 Malloc7 00:22:01.531 Malloc8 00:22:01.531 Malloc9 00:22:01.531 Malloc10 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3801732 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3801732 /var/tmp/bdevperf.sock 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3801732 ']' 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
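Note: gen_nvmf_target_json above expands the heredoc template shown below once per subsystem (1 through 10) and streams the result to bdev_svc over /dev/fd/63. After substitution, each per-controller entry should look roughly like the following, using this run's values (transport tcp, target 10.0.0.2, port 4420; hdgst and ddgst fall back to false). The surrounding document structure that wraps these entries is not shown in this excerpt.

# One expanded entry from gen_nvmf_target_json for subsystem 1, reproduced as a heredoc.
cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF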
00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.531 { 00:22:01.531 "params": { 00:22:01.531 "name": "Nvme$subsystem", 00:22:01.531 "trtype": "$TEST_TRANSPORT", 00:22:01.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.531 "adrfam": "ipv4", 00:22:01.531 "trsvcid": "$NVMF_PORT", 00:22:01.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.531 "hdgst": ${hdgst:-false}, 00:22:01.531 "ddgst": ${ddgst:-false} 00:22:01.531 }, 00:22:01.531 "method": "bdev_nvme_attach_controller" 00:22:01.531 } 00:22:01.531 EOF 00:22:01.531 )") 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.531 { 00:22:01.531 "params": { 00:22:01.531 "name": "Nvme$subsystem", 00:22:01.531 "trtype": "$TEST_TRANSPORT", 00:22:01.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.531 "adrfam": "ipv4", 00:22:01.531 "trsvcid": "$NVMF_PORT", 00:22:01.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.531 "hdgst": ${hdgst:-false}, 00:22:01.531 "ddgst": ${ddgst:-false} 00:22:01.531 }, 00:22:01.531 "method": "bdev_nvme_attach_controller" 00:22:01.531 } 00:22:01.531 EOF 00:22:01.531 )") 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.531 { 00:22:01.531 "params": { 00:22:01.531 "name": "Nvme$subsystem", 00:22:01.531 "trtype": "$TEST_TRANSPORT", 00:22:01.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.531 "adrfam": "ipv4", 00:22:01.531 "trsvcid": "$NVMF_PORT", 00:22:01.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.531 "hdgst": ${hdgst:-false}, 00:22:01.531 "ddgst": ${ddgst:-false} 00:22:01.531 }, 00:22:01.531 "method": "bdev_nvme_attach_controller" 00:22:01.531 } 00:22:01.531 EOF 00:22:01.531 )") 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:22:01.531 { 00:22:01.531 "params": { 00:22:01.531 "name": "Nvme$subsystem", 00:22:01.531 "trtype": "$TEST_TRANSPORT", 00:22:01.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.531 "adrfam": "ipv4", 00:22:01.531 "trsvcid": "$NVMF_PORT", 00:22:01.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.531 "hdgst": ${hdgst:-false}, 00:22:01.531 "ddgst": ${ddgst:-false} 00:22:01.531 }, 00:22:01.531 "method": "bdev_nvme_attach_controller" 00:22:01.531 } 00:22:01.531 EOF 00:22:01.531 )") 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.531 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.531 { 00:22:01.531 "params": { 00:22:01.531 "name": "Nvme$subsystem", 00:22:01.531 "trtype": "$TEST_TRANSPORT", 00:22:01.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.531 "adrfam": "ipv4", 00:22:01.531 "trsvcid": "$NVMF_PORT", 00:22:01.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.532 "hdgst": ${hdgst:-false}, 00:22:01.532 "ddgst": ${ddgst:-false} 00:22:01.532 }, 00:22:01.532 "method": "bdev_nvme_attach_controller" 00:22:01.532 } 00:22:01.532 EOF 00:22:01.532 )") 00:22:01.532 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.532 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.532 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.532 { 00:22:01.532 "params": { 00:22:01.532 "name": "Nvme$subsystem", 00:22:01.532 "trtype": "$TEST_TRANSPORT", 00:22:01.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.532 "adrfam": "ipv4", 00:22:01.532 "trsvcid": "$NVMF_PORT", 00:22:01.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.532 "hdgst": ${hdgst:-false}, 00:22:01.532 "ddgst": ${ddgst:-false} 00:22:01.532 }, 00:22:01.532 "method": "bdev_nvme_attach_controller" 00:22:01.532 } 00:22:01.532 EOF 00:22:01.532 )") 00:22:01.791 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.791 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.791 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.791 { 00:22:01.791 "params": { 00:22:01.791 "name": "Nvme$subsystem", 00:22:01.791 "trtype": "$TEST_TRANSPORT", 00:22:01.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.791 "adrfam": "ipv4", 00:22:01.791 "trsvcid": "$NVMF_PORT", 00:22:01.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.791 "hdgst": ${hdgst:-false}, 00:22:01.791 "ddgst": ${ddgst:-false} 00:22:01.791 }, 00:22:01.791 "method": "bdev_nvme_attach_controller" 00:22:01.791 } 00:22:01.791 EOF 00:22:01.791 )") 00:22:01.791 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.791 [2024-11-26 19:23:24.653269] Starting SPDK 
v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:22:01.791 [2024-11-26 19:23:24.653320] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:01.791 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.791 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.791 { 00:22:01.791 "params": { 00:22:01.791 "name": "Nvme$subsystem", 00:22:01.791 "trtype": "$TEST_TRANSPORT", 00:22:01.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.791 "adrfam": "ipv4", 00:22:01.791 "trsvcid": "$NVMF_PORT", 00:22:01.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.791 "hdgst": ${hdgst:-false}, 00:22:01.791 "ddgst": ${ddgst:-false} 00:22:01.791 }, 00:22:01.791 "method": "bdev_nvme_attach_controller" 00:22:01.791 } 00:22:01.791 EOF 00:22:01.791 )") 00:22:01.791 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.791 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.791 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.791 { 00:22:01.791 "params": { 00:22:01.791 "name": "Nvme$subsystem", 00:22:01.791 "trtype": "$TEST_TRANSPORT", 00:22:01.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.791 "adrfam": "ipv4", 00:22:01.791 "trsvcid": "$NVMF_PORT", 00:22:01.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.791 "hdgst": ${hdgst:-false}, 00:22:01.791 "ddgst": ${ddgst:-false} 00:22:01.791 }, 00:22:01.791 "method": "bdev_nvme_attach_controller" 00:22:01.791 } 00:22:01.791 EOF 00:22:01.791 )") 00:22:01.791 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.791 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.791 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.791 { 00:22:01.791 "params": { 00:22:01.791 "name": "Nvme$subsystem", 00:22:01.791 "trtype": "$TEST_TRANSPORT", 00:22:01.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.791 "adrfam": "ipv4", 00:22:01.791 "trsvcid": "$NVMF_PORT", 00:22:01.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.791 "hdgst": ${hdgst:-false}, 00:22:01.791 "ddgst": ${ddgst:-false} 00:22:01.791 }, 00:22:01.791 "method": "bdev_nvme_attach_controller" 00:22:01.791 } 00:22:01.791 EOF 00:22:01.791 )") 00:22:01.791 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.791 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
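The common.sh@560-584 trace above is gen_nvmf_target_json at work for the bdev_svc instance: for every requested subsystem it cats one heredoc fragment into the config array, and the jq pass plus the IFS=,/printf pair that follow assemble those fragments into the JSON handed to bdev_svc over /dev/fd/63. Stripped of the xtrace noise, each generated fragment has the shape below; the values are copied from the expanded output printed just after this point in the trace, with only the indentation added for readability.

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }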
00:22:01.791 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:01.791 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:01.791 "params": { 00:22:01.791 "name": "Nvme1", 00:22:01.791 "trtype": "tcp", 00:22:01.791 "traddr": "10.0.0.2", 00:22:01.791 "adrfam": "ipv4", 00:22:01.791 "trsvcid": "4420", 00:22:01.791 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.791 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.791 "hdgst": false, 00:22:01.791 "ddgst": false 00:22:01.791 }, 00:22:01.791 "method": "bdev_nvme_attach_controller" 00:22:01.791 },{ 00:22:01.791 "params": { 00:22:01.791 "name": "Nvme2", 00:22:01.791 "trtype": "tcp", 00:22:01.791 "traddr": "10.0.0.2", 00:22:01.791 "adrfam": "ipv4", 00:22:01.791 "trsvcid": "4420", 00:22:01.791 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:01.791 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:01.791 "hdgst": false, 00:22:01.791 "ddgst": false 00:22:01.791 }, 00:22:01.791 "method": "bdev_nvme_attach_controller" 00:22:01.791 },{ 00:22:01.791 "params": { 00:22:01.791 "name": "Nvme3", 00:22:01.791 "trtype": "tcp", 00:22:01.791 "traddr": "10.0.0.2", 00:22:01.791 "adrfam": "ipv4", 00:22:01.791 "trsvcid": "4420", 00:22:01.791 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:01.791 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:01.791 "hdgst": false, 00:22:01.791 "ddgst": false 00:22:01.791 }, 00:22:01.791 "method": "bdev_nvme_attach_controller" 00:22:01.791 },{ 00:22:01.791 "params": { 00:22:01.791 "name": "Nvme4", 00:22:01.791 "trtype": "tcp", 00:22:01.791 "traddr": "10.0.0.2", 00:22:01.791 "adrfam": "ipv4", 00:22:01.791 "trsvcid": "4420", 00:22:01.791 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:01.791 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:01.791 "hdgst": false, 00:22:01.791 "ddgst": false 00:22:01.791 }, 00:22:01.791 "method": "bdev_nvme_attach_controller" 00:22:01.791 },{ 00:22:01.791 "params": { 00:22:01.791 "name": "Nvme5", 00:22:01.791 "trtype": "tcp", 00:22:01.791 "traddr": "10.0.0.2", 00:22:01.791 "adrfam": "ipv4", 00:22:01.791 "trsvcid": "4420", 00:22:01.791 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:01.791 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:01.791 "hdgst": false, 00:22:01.791 "ddgst": false 00:22:01.791 }, 00:22:01.791 "method": "bdev_nvme_attach_controller" 00:22:01.791 },{ 00:22:01.791 "params": { 00:22:01.791 "name": "Nvme6", 00:22:01.791 "trtype": "tcp", 00:22:01.791 "traddr": "10.0.0.2", 00:22:01.791 "adrfam": "ipv4", 00:22:01.791 "trsvcid": "4420", 00:22:01.791 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:01.791 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:01.791 "hdgst": false, 00:22:01.792 "ddgst": false 00:22:01.792 }, 00:22:01.792 "method": "bdev_nvme_attach_controller" 00:22:01.792 },{ 00:22:01.792 "params": { 00:22:01.792 "name": "Nvme7", 00:22:01.792 "trtype": "tcp", 00:22:01.792 "traddr": "10.0.0.2", 00:22:01.792 "adrfam": "ipv4", 00:22:01.792 "trsvcid": "4420", 00:22:01.792 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:01.792 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:01.792 "hdgst": false, 00:22:01.792 "ddgst": false 00:22:01.792 }, 00:22:01.792 "method": "bdev_nvme_attach_controller" 00:22:01.792 },{ 00:22:01.792 "params": { 00:22:01.792 "name": "Nvme8", 00:22:01.792 "trtype": "tcp", 00:22:01.792 "traddr": "10.0.0.2", 00:22:01.792 "adrfam": "ipv4", 00:22:01.792 "trsvcid": "4420", 00:22:01.792 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:01.792 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:01.792 "hdgst": false, 00:22:01.792 "ddgst": false 00:22:01.792 }, 00:22:01.792 "method": "bdev_nvme_attach_controller" 00:22:01.792 },{ 00:22:01.792 "params": { 00:22:01.792 "name": "Nvme9", 00:22:01.792 "trtype": "tcp", 00:22:01.792 "traddr": "10.0.0.2", 00:22:01.792 "adrfam": "ipv4", 00:22:01.792 "trsvcid": "4420", 00:22:01.792 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:01.792 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:01.792 "hdgst": false, 00:22:01.792 "ddgst": false 00:22:01.792 }, 00:22:01.792 "method": "bdev_nvme_attach_controller" 00:22:01.792 },{ 00:22:01.792 "params": { 00:22:01.792 "name": "Nvme10", 00:22:01.792 "trtype": "tcp", 00:22:01.792 "traddr": "10.0.0.2", 00:22:01.792 "adrfam": "ipv4", 00:22:01.792 "trsvcid": "4420", 00:22:01.792 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:01.792 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:01.792 "hdgst": false, 00:22:01.792 "ddgst": false 00:22:01.792 }, 00:22:01.792 "method": "bdev_nvme_attach_controller" 00:22:01.792 }' 00:22:01.792 [2024-11-26 19:23:24.731664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.792 [2024-11-26 19:23:24.772458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.694 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.694 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:03.694 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:03.694 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.694 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:03.694 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.694 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3801732 00:22:03.694 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:03.694 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:04.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3801732 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:04.630 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3801455 00:22:04.630 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:04.630 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:04.630 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:04.630 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:04.630 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:22:04.630 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.630 { 00:22:04.630 "params": { 00:22:04.630 "name": "Nvme$subsystem", 00:22:04.630 "trtype": "$TEST_TRANSPORT", 00:22:04.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.630 "adrfam": "ipv4", 00:22:04.630 "trsvcid": "$NVMF_PORT", 00:22:04.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.630 "hdgst": ${hdgst:-false}, 00:22:04.630 "ddgst": ${ddgst:-false} 00:22:04.630 }, 00:22:04.630 "method": "bdev_nvme_attach_controller" 00:22:04.630 } 00:22:04.630 EOF 00:22:04.630 )") 00:22:04.630 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:04.630 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.630 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.630 { 00:22:04.630 "params": { 00:22:04.630 "name": "Nvme$subsystem", 00:22:04.630 "trtype": "$TEST_TRANSPORT", 00:22:04.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.630 "adrfam": "ipv4", 00:22:04.630 "trsvcid": "$NVMF_PORT", 00:22:04.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.630 "hdgst": ${hdgst:-false}, 00:22:04.630 "ddgst": ${ddgst:-false} 00:22:04.630 }, 00:22:04.630 "method": "bdev_nvme_attach_controller" 00:22:04.630 } 00:22:04.630 EOF 00:22:04.630 )") 00:22:04.630 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:04.630 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.630 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.630 { 00:22:04.630 "params": { 00:22:04.630 "name": "Nvme$subsystem", 00:22:04.630 "trtype": "$TEST_TRANSPORT", 00:22:04.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.630 "adrfam": "ipv4", 00:22:04.630 "trsvcid": "$NVMF_PORT", 00:22:04.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.631 "hdgst": ${hdgst:-false}, 00:22:04.631 "ddgst": ${ddgst:-false} 00:22:04.631 }, 00:22:04.631 "method": "bdev_nvme_attach_controller" 00:22:04.631 } 00:22:04.631 EOF 00:22:04.631 )") 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.631 { 00:22:04.631 "params": { 00:22:04.631 "name": "Nvme$subsystem", 00:22:04.631 "trtype": "$TEST_TRANSPORT", 00:22:04.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.631 "adrfam": "ipv4", 00:22:04.631 "trsvcid": "$NVMF_PORT", 00:22:04.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.631 "hdgst": ${hdgst:-false}, 00:22:04.631 "ddgst": ${ddgst:-false} 00:22:04.631 }, 00:22:04.631 "method": "bdev_nvme_attach_controller" 00:22:04.631 } 00:22:04.631 EOF 00:22:04.631 )") 00:22:04.631 19:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.631 { 00:22:04.631 "params": { 00:22:04.631 "name": "Nvme$subsystem", 00:22:04.631 "trtype": "$TEST_TRANSPORT", 00:22:04.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.631 "adrfam": "ipv4", 00:22:04.631 "trsvcid": "$NVMF_PORT", 00:22:04.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.631 "hdgst": ${hdgst:-false}, 00:22:04.631 "ddgst": ${ddgst:-false} 00:22:04.631 }, 00:22:04.631 "method": "bdev_nvme_attach_controller" 00:22:04.631 } 00:22:04.631 EOF 00:22:04.631 )") 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.631 { 00:22:04.631 "params": { 00:22:04.631 "name": "Nvme$subsystem", 00:22:04.631 "trtype": "$TEST_TRANSPORT", 00:22:04.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.631 "adrfam": "ipv4", 00:22:04.631 "trsvcid": "$NVMF_PORT", 00:22:04.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.631 "hdgst": ${hdgst:-false}, 00:22:04.631 "ddgst": ${ddgst:-false} 00:22:04.631 }, 00:22:04.631 "method": "bdev_nvme_attach_controller" 00:22:04.631 } 00:22:04.631 EOF 00:22:04.631 )") 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.631 { 00:22:04.631 "params": { 00:22:04.631 "name": "Nvme$subsystem", 00:22:04.631 "trtype": "$TEST_TRANSPORT", 00:22:04.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.631 "adrfam": "ipv4", 00:22:04.631 "trsvcid": "$NVMF_PORT", 00:22:04.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.631 "hdgst": ${hdgst:-false}, 00:22:04.631 "ddgst": ${ddgst:-false} 00:22:04.631 }, 00:22:04.631 "method": "bdev_nvme_attach_controller" 00:22:04.631 } 00:22:04.631 EOF 00:22:04.631 )") 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:04.631 [2024-11-26 19:23:27.581656] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:22:04.631 [2024-11-26 19:23:27.581711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3802221 ] 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.631 { 00:22:04.631 "params": { 00:22:04.631 "name": "Nvme$subsystem", 00:22:04.631 "trtype": "$TEST_TRANSPORT", 00:22:04.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.631 "adrfam": "ipv4", 00:22:04.631 "trsvcid": "$NVMF_PORT", 00:22:04.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.631 "hdgst": ${hdgst:-false}, 00:22:04.631 "ddgst": ${ddgst:-false} 00:22:04.631 }, 00:22:04.631 "method": "bdev_nvme_attach_controller" 00:22:04.631 } 00:22:04.631 EOF 00:22:04.631 )") 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.631 { 00:22:04.631 "params": { 00:22:04.631 "name": "Nvme$subsystem", 00:22:04.631 "trtype": "$TEST_TRANSPORT", 00:22:04.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.631 "adrfam": "ipv4", 00:22:04.631 "trsvcid": "$NVMF_PORT", 00:22:04.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.631 "hdgst": ${hdgst:-false}, 00:22:04.631 "ddgst": ${ddgst:-false} 00:22:04.631 }, 00:22:04.631 "method": "bdev_nvme_attach_controller" 00:22:04.631 } 00:22:04.631 EOF 00:22:04.631 )") 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.631 { 00:22:04.631 "params": { 00:22:04.631 "name": "Nvme$subsystem", 00:22:04.631 "trtype": "$TEST_TRANSPORT", 00:22:04.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.631 "adrfam": "ipv4", 00:22:04.631 "trsvcid": "$NVMF_PORT", 00:22:04.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.631 "hdgst": ${hdgst:-false}, 00:22:04.631 "ddgst": ${ddgst:-false} 00:22:04.631 }, 00:22:04.631 "method": "bdev_nvme_attach_controller" 00:22:04.631 } 00:22:04.631 EOF 00:22:04.631 )") 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
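This second gen_nvmf_target_json pass feeds bdevperf rather than bdev_svc; the expanded Nvme1-Nvme10 attach entries are printed immediately below, and the EAL banner shows bdevperf coming up as spdk_pid3802221 on a single core. Pulling the scattered trace pieces together, the run being started is essentially the command below. The flags and arguments are exactly as logged at shutdown.sh@92; the path is shortened, and feeding the JSON through the same <(...) process substitution that the "Killed" message shows for bdev_svc is an inference from the /dev/fd/62 path, not something the trace spells out.

    # queue depth 64, 64 KiB I/O, verify workload, 1 second, against the ten generated NVMe-oF TCP bdevs
    ./build/examples/bdevperf \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 1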
00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:04.631 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:04.631 "params": { 00:22:04.631 "name": "Nvme1", 00:22:04.631 "trtype": "tcp", 00:22:04.631 "traddr": "10.0.0.2", 00:22:04.631 "adrfam": "ipv4", 00:22:04.631 "trsvcid": "4420", 00:22:04.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:04.631 "hdgst": false, 00:22:04.631 "ddgst": false 00:22:04.631 }, 00:22:04.631 "method": "bdev_nvme_attach_controller" 00:22:04.631 },{ 00:22:04.631 "params": { 00:22:04.631 "name": "Nvme2", 00:22:04.631 "trtype": "tcp", 00:22:04.631 "traddr": "10.0.0.2", 00:22:04.631 "adrfam": "ipv4", 00:22:04.631 "trsvcid": "4420", 00:22:04.631 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:04.631 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:04.631 "hdgst": false, 00:22:04.631 "ddgst": false 00:22:04.631 }, 00:22:04.631 "method": "bdev_nvme_attach_controller" 00:22:04.631 },{ 00:22:04.631 "params": { 00:22:04.631 "name": "Nvme3", 00:22:04.631 "trtype": "tcp", 00:22:04.631 "traddr": "10.0.0.2", 00:22:04.631 "adrfam": "ipv4", 00:22:04.631 "trsvcid": "4420", 00:22:04.631 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:04.631 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:04.631 "hdgst": false, 00:22:04.631 "ddgst": false 00:22:04.631 }, 00:22:04.631 "method": "bdev_nvme_attach_controller" 00:22:04.631 },{ 00:22:04.631 "params": { 00:22:04.631 "name": "Nvme4", 00:22:04.631 "trtype": "tcp", 00:22:04.631 "traddr": "10.0.0.2", 00:22:04.631 "adrfam": "ipv4", 00:22:04.631 "trsvcid": "4420", 00:22:04.631 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:04.631 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:04.631 "hdgst": false, 00:22:04.631 "ddgst": false 00:22:04.631 }, 00:22:04.631 "method": "bdev_nvme_attach_controller" 00:22:04.631 },{ 00:22:04.631 "params": { 00:22:04.631 "name": "Nvme5", 00:22:04.631 "trtype": "tcp", 00:22:04.631 "traddr": "10.0.0.2", 00:22:04.631 "adrfam": "ipv4", 00:22:04.631 "trsvcid": "4420", 00:22:04.631 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:04.631 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:04.631 "hdgst": false, 00:22:04.631 "ddgst": false 00:22:04.631 }, 00:22:04.631 "method": "bdev_nvme_attach_controller" 00:22:04.632 },{ 00:22:04.632 "params": { 00:22:04.632 "name": "Nvme6", 00:22:04.632 "trtype": "tcp", 00:22:04.632 "traddr": "10.0.0.2", 00:22:04.632 "adrfam": "ipv4", 00:22:04.632 "trsvcid": "4420", 00:22:04.632 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:04.632 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:04.632 "hdgst": false, 00:22:04.632 "ddgst": false 00:22:04.632 }, 00:22:04.632 "method": "bdev_nvme_attach_controller" 00:22:04.632 },{ 00:22:04.632 "params": { 00:22:04.632 "name": "Nvme7", 00:22:04.632 "trtype": "tcp", 00:22:04.632 "traddr": "10.0.0.2", 00:22:04.632 "adrfam": "ipv4", 00:22:04.632 "trsvcid": "4420", 00:22:04.632 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:04.632 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:04.632 "hdgst": false, 00:22:04.632 "ddgst": false 00:22:04.632 }, 00:22:04.632 "method": "bdev_nvme_attach_controller" 00:22:04.632 },{ 00:22:04.632 "params": { 00:22:04.632 "name": "Nvme8", 00:22:04.632 "trtype": "tcp", 00:22:04.632 "traddr": "10.0.0.2", 00:22:04.632 "adrfam": "ipv4", 00:22:04.632 "trsvcid": "4420", 00:22:04.632 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:04.632 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:04.632 "hdgst": false, 00:22:04.632 "ddgst": false 00:22:04.632 }, 00:22:04.632 "method": "bdev_nvme_attach_controller" 00:22:04.632 },{ 00:22:04.632 "params": { 00:22:04.632 "name": "Nvme9", 00:22:04.632 "trtype": "tcp", 00:22:04.632 "traddr": "10.0.0.2", 00:22:04.632 "adrfam": "ipv4", 00:22:04.632 "trsvcid": "4420", 00:22:04.632 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:04.632 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:04.632 "hdgst": false, 00:22:04.632 "ddgst": false 00:22:04.632 }, 00:22:04.632 "method": "bdev_nvme_attach_controller" 00:22:04.632 },{ 00:22:04.632 "params": { 00:22:04.632 "name": "Nvme10", 00:22:04.632 "trtype": "tcp", 00:22:04.632 "traddr": "10.0.0.2", 00:22:04.632 "adrfam": "ipv4", 00:22:04.632 "trsvcid": "4420", 00:22:04.632 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:04.632 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:04.632 "hdgst": false, 00:22:04.632 "ddgst": false 00:22:04.632 }, 00:22:04.632 "method": "bdev_nvme_attach_controller" 00:22:04.632 }' 00:22:04.632 [2024-11-26 19:23:27.657059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.632 [2024-11-26 19:23:27.698772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.010 Running I/O for 1 seconds... 00:22:06.950 2249.00 IOPS, 140.56 MiB/s 00:22:06.950 Latency(us) 00:22:06.950 [2024-11-26T18:23:30.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.950 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:06.950 Verification LBA range: start 0x0 length 0x400 00:22:06.950 Nvme1n1 : 1.14 280.60 17.54 0.00 0.00 224933.21 19099.06 227690.79 00:22:06.950 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:06.950 Verification LBA range: start 0x0 length 0x400 00:22:06.950 Nvme2n1 : 1.05 249.60 15.60 0.00 0.00 245442.96 1919.27 227690.79 00:22:06.950 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:06.950 Verification LBA range: start 0x0 length 0x400 00:22:06.950 Nvme3n1 : 1.10 290.80 18.18 0.00 0.00 210086.08 14730.00 211712.49 00:22:06.950 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:06.950 Verification LBA range: start 0x0 length 0x400 00:22:06.950 Nvme4n1 : 1.14 281.33 17.58 0.00 0.00 216170.30 16352.79 196732.83 00:22:06.950 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:06.950 Verification LBA range: start 0x0 length 0x400 00:22:06.950 Nvme5n1 : 1.13 282.35 17.65 0.00 0.00 212238.14 14792.41 219701.64 00:22:06.950 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:06.950 Verification LBA range: start 0x0 length 0x400 00:22:06.950 Nvme6n1 : 1.15 277.89 17.37 0.00 0.00 212848.25 16727.28 216705.71 00:22:06.950 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:06.950 Verification LBA range: start 0x0 length 0x400 00:22:06.950 Nvme7n1 : 1.15 282.48 17.65 0.00 0.00 206083.65 2387.38 215707.06 00:22:06.950 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:06.950 Verification LBA range: start 0x0 length 0x400 00:22:06.950 Nvme8n1 : 1.13 284.12 17.76 0.00 0.00 201621.60 15291.73 214708.42 00:22:06.950 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:06.951 Verification LBA range: start 0x0 length 0x400 00:22:06.951 Nvme9n1 : 1.15 277.27 17.33 0.00 0.00 204035.80 17101.78 221698.93 00:22:06.951 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:22:06.951 Verification LBA range: start 0x0 length 0x400 00:22:06.951 Nvme10n1 : 1.16 276.59 17.29 0.00 0.00 201615.26 13232.03 239674.51 00:22:06.951 [2024-11-26T18:23:30.065Z] =================================================================================================================== 00:22:06.951 [2024-11-26T18:23:30.065Z] Total : 2783.04 173.94 0.00 0.00 212898.98 1919.27 239674.51 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:07.209 rmmod nvme_tcp 00:22:07.209 rmmod nvme_fabrics 00:22:07.209 rmmod nvme_keyring 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3801455 ']' 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3801455 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3801455 ']' 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3801455 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3801455 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:07.209 19:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3801455' 00:22:07.209 killing process with pid 3801455 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3801455 00:22:07.209 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3801455 00:22:07.778 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:07.778 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:07.778 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:07.778 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:07.778 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:07.778 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:07.778 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:07.778 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:07.778 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:07.778 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.778 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.778 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.682 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:09.682 00:22:09.682 real 0m15.609s 00:22:09.682 user 0m35.096s 00:22:09.682 sys 0m5.895s 00:22:09.682 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:09.682 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:09.682 ************************************ 00:22:09.682 END TEST nvmf_shutdown_tc1 00:22:09.682 ************************************ 00:22:09.682 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:09.682 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:09.682 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:09.682 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:09.943 ************************************ 00:22:09.943 START TEST nvmf_shutdown_tc2 00:22:09.943 ************************************ 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:09.943 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:09.944 19:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:09.944 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.944 19:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:09.944 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:09.944 Found net devices under 0000:86:00.0: cvl_0_0 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.944 19:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:09.944 Found net devices under 0000:86:00.1: cvl_0_1 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:09.944 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:10.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:10.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:22:10.204 00:22:10.204 --- 10.0.0.2 ping statistics --- 00:22:10.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.204 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:10.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:10.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:22:10.204 00:22:10.204 --- 10.0.0.1 ping statistics --- 00:22:10.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.204 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3803251 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3803251 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3803251 ']' 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.204 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.204 [2024-11-26 19:23:33.190538] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:22:10.204 [2024-11-26 19:23:33.190589] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.204 [2024-11-26 19:23:33.271823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:10.204 [2024-11-26 19:23:33.314349] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.204 [2024-11-26 19:23:33.314387] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.204 [2024-11-26 19:23:33.314395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.204 [2024-11-26 19:23:33.314400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.204 [2024-11-26 19:23:33.314406] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
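The namespace plumbing that nvmf_tcp_init traced above boils down to the sketch below. It is a condensed reading of this particular run, not a general recipe: the cvl_0_0/cvl_0_1 names belong to the two ports found under 0000:86:00.0 and 0000:86:00.1 on this host, the 10.0.0.0/24 addresses and the 0x1E core mask are the values nvmf/common.sh and shutdown.sh used here, and the SPDK_NVMF comment on the iptables rule is dropped for brevity.

    ip netns add cvl_0_0_ns_spdk                          # target runs inside its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # sanity check: initiator can reach the target address
    # Path shortened; the trace stacks the netns prefix because common.sh@293 prepends it to
    # NVMF_APP on each init, and re-entering the same namespace is harmless.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E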
00:22:10.204 [2024-11-26 19:23:33.316018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.204 [2024-11-26 19:23:33.316038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:10.204 [2024-11-26 19:23:33.316127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.204 [2024-11-26 19:23:33.316127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.464 [2024-11-26 19:23:33.462216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.464 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.464 Malloc1 00:22:10.722 [2024-11-26 19:23:33.578994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.722 Malloc2 00:22:10.722 Malloc3 00:22:10.722 Malloc4 00:22:10.722 Malloc5 00:22:10.722 Malloc6 00:22:10.722 Malloc7 00:22:10.983 Malloc8 00:22:10.983 Malloc9 00:22:10.983 Malloc10 00:22:10.983 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.983 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:10.983 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:10.983 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3803313 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3803313 /var/tmp/bdevperf.sock 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3803313 ']' 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:10.983 19:23:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:10.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.983 { 00:22:10.983 "params": { 00:22:10.983 "name": "Nvme$subsystem", 00:22:10.983 "trtype": "$TEST_TRANSPORT", 00:22:10.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.983 "adrfam": "ipv4", 00:22:10.983 "trsvcid": "$NVMF_PORT", 00:22:10.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.983 "hdgst": ${hdgst:-false}, 00:22:10.983 "ddgst": ${ddgst:-false} 00:22:10.983 }, 00:22:10.983 "method": "bdev_nvme_attach_controller" 00:22:10.983 } 00:22:10.983 EOF 00:22:10.983 )") 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.983 { 00:22:10.983 "params": { 00:22:10.983 "name": "Nvme$subsystem", 00:22:10.983 "trtype": "$TEST_TRANSPORT", 00:22:10.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.983 "adrfam": "ipv4", 00:22:10.983 "trsvcid": "$NVMF_PORT", 00:22:10.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.983 "hdgst": ${hdgst:-false}, 00:22:10.983 "ddgst": ${ddgst:-false} 00:22:10.983 }, 00:22:10.983 "method": "bdev_nvme_attach_controller" 00:22:10.983 } 00:22:10.983 EOF 00:22:10.983 )") 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.983 { 00:22:10.983 "params": { 00:22:10.983 
"name": "Nvme$subsystem", 00:22:10.983 "trtype": "$TEST_TRANSPORT", 00:22:10.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.983 "adrfam": "ipv4", 00:22:10.983 "trsvcid": "$NVMF_PORT", 00:22:10.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.983 "hdgst": ${hdgst:-false}, 00:22:10.983 "ddgst": ${ddgst:-false} 00:22:10.983 }, 00:22:10.983 "method": "bdev_nvme_attach_controller" 00:22:10.983 } 00:22:10.983 EOF 00:22:10.983 )") 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.983 { 00:22:10.983 "params": { 00:22:10.983 "name": "Nvme$subsystem", 00:22:10.983 "trtype": "$TEST_TRANSPORT", 00:22:10.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.983 "adrfam": "ipv4", 00:22:10.983 "trsvcid": "$NVMF_PORT", 00:22:10.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.983 "hdgst": ${hdgst:-false}, 00:22:10.983 "ddgst": ${ddgst:-false} 00:22:10.983 }, 00:22:10.983 "method": "bdev_nvme_attach_controller" 00:22:10.983 } 00:22:10.983 EOF 00:22:10.983 )") 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.983 { 00:22:10.983 "params": { 00:22:10.983 "name": "Nvme$subsystem", 00:22:10.983 "trtype": "$TEST_TRANSPORT", 00:22:10.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.983 "adrfam": "ipv4", 00:22:10.983 "trsvcid": "$NVMF_PORT", 00:22:10.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.983 "hdgst": ${hdgst:-false}, 00:22:10.983 "ddgst": ${ddgst:-false} 00:22:10.983 }, 00:22:10.983 "method": "bdev_nvme_attach_controller" 00:22:10.983 } 00:22:10.983 EOF 00:22:10.983 )") 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.983 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.983 { 00:22:10.983 "params": { 00:22:10.983 "name": "Nvme$subsystem", 00:22:10.983 "trtype": "$TEST_TRANSPORT", 00:22:10.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.983 "adrfam": "ipv4", 00:22:10.983 "trsvcid": "$NVMF_PORT", 00:22:10.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.983 "hdgst": ${hdgst:-false}, 00:22:10.983 "ddgst": ${ddgst:-false} 00:22:10.983 }, 00:22:10.983 "method": "bdev_nvme_attach_controller" 00:22:10.983 } 00:22:10.983 EOF 00:22:10.983 )") 00:22:10.984 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.984 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:22:10.984 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.984 { 00:22:10.984 "params": { 00:22:10.984 "name": "Nvme$subsystem", 00:22:10.984 "trtype": "$TEST_TRANSPORT", 00:22:10.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.984 "adrfam": "ipv4", 00:22:10.984 "trsvcid": "$NVMF_PORT", 00:22:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.984 "hdgst": ${hdgst:-false}, 00:22:10.984 "ddgst": ${ddgst:-false} 00:22:10.984 }, 00:22:10.984 "method": "bdev_nvme_attach_controller" 00:22:10.984 } 00:22:10.984 EOF 00:22:10.984 )") 00:22:10.984 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.984 [2024-11-26 19:23:34.052299] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:22:10.984 [2024-11-26 19:23:34.052348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3803313 ] 00:22:10.984 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.984 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.984 { 00:22:10.984 "params": { 00:22:10.984 "name": "Nvme$subsystem", 00:22:10.984 "trtype": "$TEST_TRANSPORT", 00:22:10.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.984 "adrfam": "ipv4", 00:22:10.984 "trsvcid": "$NVMF_PORT", 00:22:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.984 "hdgst": ${hdgst:-false}, 00:22:10.984 "ddgst": ${ddgst:-false} 00:22:10.984 }, 00:22:10.984 "method": "bdev_nvme_attach_controller" 00:22:10.984 } 00:22:10.984 EOF 00:22:10.984 )") 00:22:10.984 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.984 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.984 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.984 { 00:22:10.984 "params": { 00:22:10.984 "name": "Nvme$subsystem", 00:22:10.984 "trtype": "$TEST_TRANSPORT", 00:22:10.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.984 "adrfam": "ipv4", 00:22:10.984 "trsvcid": "$NVMF_PORT", 00:22:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.984 "hdgst": ${hdgst:-false}, 00:22:10.984 "ddgst": ${ddgst:-false} 00:22:10.984 }, 00:22:10.984 "method": "bdev_nvme_attach_controller" 00:22:10.984 } 00:22:10.984 EOF 00:22:10.984 )") 00:22:10.984 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.984 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.984 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.984 { 00:22:10.984 "params": { 00:22:10.984 "name": "Nvme$subsystem", 00:22:10.984 "trtype": "$TEST_TRANSPORT", 00:22:10.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.984 
"adrfam": "ipv4", 00:22:10.984 "trsvcid": "$NVMF_PORT", 00:22:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.984 "hdgst": ${hdgst:-false}, 00:22:10.984 "ddgst": ${ddgst:-false} 00:22:10.984 }, 00:22:10.984 "method": "bdev_nvme_attach_controller" 00:22:10.984 } 00:22:10.984 EOF 00:22:10.984 )") 00:22:10.984 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.984 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:10.984 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:10.984 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:10.984 "params": { 00:22:10.984 "name": "Nvme1", 00:22:10.984 "trtype": "tcp", 00:22:10.984 "traddr": "10.0.0.2", 00:22:10.984 "adrfam": "ipv4", 00:22:10.984 "trsvcid": "4420", 00:22:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.984 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:10.984 "hdgst": false, 00:22:10.984 "ddgst": false 00:22:10.984 }, 00:22:10.984 "method": "bdev_nvme_attach_controller" 00:22:10.984 },{ 00:22:10.984 "params": { 00:22:10.984 "name": "Nvme2", 00:22:10.984 "trtype": "tcp", 00:22:10.984 "traddr": "10.0.0.2", 00:22:10.984 "adrfam": "ipv4", 00:22:10.984 "trsvcid": "4420", 00:22:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:10.984 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:10.984 "hdgst": false, 00:22:10.984 "ddgst": false 00:22:10.984 }, 00:22:10.984 "method": "bdev_nvme_attach_controller" 00:22:10.984 },{ 00:22:10.984 "params": { 00:22:10.984 "name": "Nvme3", 00:22:10.984 "trtype": "tcp", 00:22:10.984 "traddr": "10.0.0.2", 00:22:10.984 "adrfam": "ipv4", 00:22:10.984 "trsvcid": "4420", 00:22:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:10.984 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:10.984 "hdgst": false, 00:22:10.984 "ddgst": false 00:22:10.984 }, 00:22:10.984 "method": "bdev_nvme_attach_controller" 00:22:10.984 },{ 00:22:10.984 "params": { 00:22:10.984 "name": "Nvme4", 00:22:10.984 "trtype": "tcp", 00:22:10.984 "traddr": "10.0.0.2", 00:22:10.984 "adrfam": "ipv4", 00:22:10.984 "trsvcid": "4420", 00:22:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:10.984 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:10.984 "hdgst": false, 00:22:10.984 "ddgst": false 00:22:10.984 }, 00:22:10.984 "method": "bdev_nvme_attach_controller" 00:22:10.984 },{ 00:22:10.984 "params": { 00:22:10.984 "name": "Nvme5", 00:22:10.984 "trtype": "tcp", 00:22:10.984 "traddr": "10.0.0.2", 00:22:10.984 "adrfam": "ipv4", 00:22:10.984 "trsvcid": "4420", 00:22:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:10.984 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:10.984 "hdgst": false, 00:22:10.984 "ddgst": false 00:22:10.984 }, 00:22:10.984 "method": "bdev_nvme_attach_controller" 00:22:10.984 },{ 00:22:10.984 "params": { 00:22:10.984 "name": "Nvme6", 00:22:10.984 "trtype": "tcp", 00:22:10.984 "traddr": "10.0.0.2", 00:22:10.984 "adrfam": "ipv4", 00:22:10.984 "trsvcid": "4420", 00:22:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:10.984 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:10.984 "hdgst": false, 00:22:10.984 "ddgst": false 00:22:10.984 }, 00:22:10.984 "method": "bdev_nvme_attach_controller" 00:22:10.984 },{ 00:22:10.984 "params": { 00:22:10.984 "name": "Nvme7", 00:22:10.984 "trtype": "tcp", 00:22:10.984 "traddr": "10.0.0.2", 
00:22:10.984 "adrfam": "ipv4", 00:22:10.984 "trsvcid": "4420", 00:22:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:10.984 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:10.984 "hdgst": false, 00:22:10.984 "ddgst": false 00:22:10.984 }, 00:22:10.984 "method": "bdev_nvme_attach_controller" 00:22:10.984 },{ 00:22:10.984 "params": { 00:22:10.984 "name": "Nvme8", 00:22:10.984 "trtype": "tcp", 00:22:10.984 "traddr": "10.0.0.2", 00:22:10.984 "adrfam": "ipv4", 00:22:10.984 "trsvcid": "4420", 00:22:10.984 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:10.984 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:10.984 "hdgst": false, 00:22:10.984 "ddgst": false 00:22:10.984 }, 00:22:10.984 "method": "bdev_nvme_attach_controller" 00:22:10.984 },{ 00:22:10.984 "params": { 00:22:10.984 "name": "Nvme9", 00:22:10.984 "trtype": "tcp", 00:22:10.984 "traddr": "10.0.0.2", 00:22:10.985 "adrfam": "ipv4", 00:22:10.985 "trsvcid": "4420", 00:22:10.985 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:10.985 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:10.985 "hdgst": false, 00:22:10.985 "ddgst": false 00:22:10.985 }, 00:22:10.985 "method": "bdev_nvme_attach_controller" 00:22:10.985 },{ 00:22:10.985 "params": { 00:22:10.985 "name": "Nvme10", 00:22:10.985 "trtype": "tcp", 00:22:10.985 "traddr": "10.0.0.2", 00:22:10.985 "adrfam": "ipv4", 00:22:10.985 "trsvcid": "4420", 00:22:10.985 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:10.985 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:10.985 "hdgst": false, 00:22:10.985 "ddgst": false 00:22:10.985 }, 00:22:10.985 "method": "bdev_nvme_attach_controller" 00:22:10.985 }' 00:22:11.243 [2024-11-26 19:23:34.127556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.243 [2024-11-26 19:23:34.168817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.151 Running I/O for 10 seconds... 
00:22:13.151 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.151 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:13.151 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:13.151 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.151 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:13.151 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.151 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:13.151 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:13.151 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:13.151 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:13.151 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:13.151 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:13.151 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:13.151 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:13.151 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:13.151 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.151 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:13.152 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.152 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:13.152 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:13.152 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.430 19:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3803313 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3803313 ']' 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3803313 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3803313 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3803313' 00:22:13.430 killing process with pid 3803313 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3803313 00:22:13.430 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3803313 00:22:13.430 Received shutdown signal, test time was about 0.662280 seconds 00:22:13.430 00:22:13.430 Latency(us) 00:22:13.430 [2024-11-26T18:23:36.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.430 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.430 Verification LBA range: start 0x0 length 0x400 00:22:13.430 Nvme1n1 : 0.65 295.96 18.50 0.00 0.00 212803.45 25839.91 197731.47 00:22:13.430 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.430 Verification LBA range: start 0x0 length 0x400 00:22:13.430 Nvme2n1 : 0.64 298.02 18.63 0.00 0.00 205838.30 25964.74 184749.10 00:22:13.430 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.430 Verification LBA range: start 0x0 length 0x400 00:22:13.430 Nvme3n1 : 0.64 311.64 19.48 0.00 0.00 190362.95 5960.66 207717.91 00:22:13.430 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.430 Verification LBA range: start 0x0 length 0x400 00:22:13.430 Nvme4n1 : 0.64 298.80 18.68 0.00 0.00 195142.87 25839.91 
182751.82 00:22:13.430 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.430 Verification LBA range: start 0x0 length 0x400 00:22:13.430 Nvme5n1 : 0.66 290.19 18.14 0.00 0.00 196472.12 18474.91 196732.83 00:22:13.430 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.430 Verification LBA range: start 0x0 length 0x400 00:22:13.430 Nvme6n1 : 0.66 290.49 18.16 0.00 0.00 190432.30 14667.58 213709.78 00:22:13.430 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.430 Verification LBA range: start 0x0 length 0x400 00:22:13.430 Nvme7n1 : 0.65 293.90 18.37 0.00 0.00 183684.31 14542.75 218702.99 00:22:13.430 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.430 Verification LBA range: start 0x0 length 0x400 00:22:13.430 Nvme8n1 : 0.66 292.27 18.27 0.00 0.00 179789.37 16602.45 215707.06 00:22:13.430 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.430 Verification LBA range: start 0x0 length 0x400 00:22:13.430 Nvme9n1 : 0.63 204.42 12.78 0.00 0.00 246711.83 30333.81 226692.14 00:22:13.430 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.430 Verification LBA range: start 0x0 length 0x400 00:22:13.430 Nvme10n1 : 0.63 201.83 12.61 0.00 0.00 241852.71 23093.64 241671.80 00:22:13.430 [2024-11-26T18:23:36.544Z] =================================================================================================================== 00:22:13.430 [2024-11-26T18:23:36.544Z] Total : 2777.53 173.60 0.00 0.00 201410.63 5960.66 241671.80 00:22:13.689 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:14.625 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3803251 00:22:14.625 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:14.625 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:14.626 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:14.626 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:14.626 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:14.626 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:14.626 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:14.626 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:14.626 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:14.626 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:14.626 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:14.626 rmmod nvme_tcp 00:22:14.626 rmmod nvme_fabrics 00:22:14.626 rmmod nvme_keyring 00:22:14.626 19:23:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:14.884 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:14.884 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:14.884 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3803251 ']' 00:22:14.884 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3803251 00:22:14.885 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3803251 ']' 00:22:14.885 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3803251 00:22:14.885 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:14.885 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.885 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3803251 00:22:14.885 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:14.885 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:14.885 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3803251' 00:22:14.885 killing process with pid 3803251 00:22:14.885 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3803251 00:22:14.885 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3803251 00:22:15.145 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:15.145 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:15.145 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:15.145 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:15.145 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:15.145 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:15.145 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:15.145 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:15.145 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:15.145 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.145 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.145 19:23:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.683 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:17.683 00:22:17.683 real 0m7.420s 00:22:17.683 user 0m21.918s 00:22:17.683 sys 0m1.274s 00:22:17.683 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:17.683 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:17.683 ************************************ 00:22:17.683 END TEST nvmf_shutdown_tc2 00:22:17.683 ************************************ 00:22:17.683 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:17.683 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:17.683 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:17.683 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:17.683 ************************************ 00:22:17.683 START TEST nvmf_shutdown_tc3 00:22:17.683 ************************************ 00:22:17.683 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:17.683 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:17.683 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:17.683 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:17.683 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.683 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.684 19:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.684 19:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:17.684 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:17.684 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.684 19:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:17.684 Found net devices under 0000:86:00.0: cvl_0_0 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:17.684 Found net devices under 0000:86:00.1: cvl_0_1 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:17.684 19:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:17.684 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:17.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:22:17.685 00:22:17.685 --- 10.0.0.2 ping statistics --- 00:22:17.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.685 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:17.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:22:17.685 00:22:17.685 --- 10.0.0.1 ping statistics --- 00:22:17.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.685 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3804574 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3804574 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3804574 ']' 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
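For readability, the nvmf_tcp_init sequence traced above boils down to the following shell steps: the first e810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the default namespace as the initiator side at 10.0.0.1. The sketch below only condenses commands already visible in the trace; it is not a separate helper provided by the harness.

    # Condensed from the nvmf_tcp_init steps traced above.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in its own netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP traffic to the default listener port (the harness also
    # tags this rule with an SPDK_NVMF comment so it can be cleaned up later).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host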
00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.685 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:17.685 [2024-11-26 19:23:40.672733] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:22:17.685 [2024-11-26 19:23:40.672774] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.685 [2024-11-26 19:23:40.752384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:17.685 [2024-11-26 19:23:40.792268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.685 [2024-11-26 19:23:40.792308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.685 [2024-11-26 19:23:40.792315] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.685 [2024-11-26 19:23:40.792321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.685 [2024-11-26 19:23:40.792327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:17.685 [2024-11-26 19:23:40.793822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.685 [2024-11-26 19:23:40.793854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.685 [2024-11-26 19:23:40.793961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.685 [2024-11-26 19:23:40.793962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.621 [2024-11-26 19:23:41.553560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:18.621 19:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.621 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.622 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.622 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.622 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.622 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.622 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.622 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.622 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.622 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.622 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.622 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.622 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.622 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.622 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:18.622 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.622 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.622 Malloc1 
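The loop at shutdown.sh@28-29 traced above appends one RPC batch per subsystem (1..10) to rpcs.txt, and the rpc_cmd at shutdown.sh@36 then replays the whole file against the target, which is what produces the Malloc1..Malloc10 bdevs reported here. The exact heredoc contents are not captured in this trace, so the following is only a hypothetical sketch of an equivalent per-subsystem batch using standard rpc.py commands; the malloc size, block size and serial numbers are illustrative, not the harness's actual values.

    # Hypothetical equivalent of the per-subsystem config that shutdown.sh
    # accumulates in rpcs.txt; the transport creation mirrors shutdown.sh@21 above.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in {1..10}; do
        ./scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK000000000000$i
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done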
00:22:18.622 [2024-11-26 19:23:41.657347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.622 Malloc2 00:22:18.622 Malloc3 00:22:18.880 Malloc4 00:22:18.880 Malloc5 00:22:18.880 Malloc6 00:22:18.880 Malloc7 00:22:18.880 Malloc8 00:22:18.880 Malloc9 00:22:19.139 Malloc10 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3804852 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3804852 /var/tmp/bdevperf.sock 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3804852 ']' 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:19.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:19.139 { 00:22:19.139 "params": { 00:22:19.139 "name": "Nvme$subsystem", 00:22:19.139 "trtype": "$TEST_TRANSPORT", 00:22:19.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.139 "adrfam": "ipv4", 00:22:19.139 "trsvcid": "$NVMF_PORT", 00:22:19.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.139 "hdgst": ${hdgst:-false}, 00:22:19.139 "ddgst": ${ddgst:-false} 00:22:19.139 }, 00:22:19.139 "method": "bdev_nvme_attach_controller" 00:22:19.139 } 00:22:19.139 EOF 00:22:19.139 )") 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:19.139 { 00:22:19.139 "params": { 00:22:19.139 "name": "Nvme$subsystem", 00:22:19.139 "trtype": "$TEST_TRANSPORT", 00:22:19.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.139 "adrfam": "ipv4", 00:22:19.139 "trsvcid": "$NVMF_PORT", 00:22:19.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.139 "hdgst": ${hdgst:-false}, 00:22:19.139 "ddgst": ${ddgst:-false} 00:22:19.139 }, 00:22:19.139 "method": "bdev_nvme_attach_controller" 00:22:19.139 } 00:22:19.139 EOF 00:22:19.139 )") 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:19.139 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:19.140 { 00:22:19.140 "params": { 00:22:19.140 "name": "Nvme$subsystem", 00:22:19.140 "trtype": "$TEST_TRANSPORT", 00:22:19.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.140 "adrfam": "ipv4", 00:22:19.140 "trsvcid": "$NVMF_PORT", 00:22:19.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.140 "hdgst": ${hdgst:-false}, 00:22:19.140 "ddgst": ${ddgst:-false} 00:22:19.140 }, 00:22:19.140 "method": "bdev_nvme_attach_controller" 00:22:19.140 } 00:22:19.140 EOF 00:22:19.140 )") 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:22:19.140 { 00:22:19.140 "params": { 00:22:19.140 "name": "Nvme$subsystem", 00:22:19.140 "trtype": "$TEST_TRANSPORT", 00:22:19.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.140 "adrfam": "ipv4", 00:22:19.140 "trsvcid": "$NVMF_PORT", 00:22:19.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.140 "hdgst": ${hdgst:-false}, 00:22:19.140 "ddgst": ${ddgst:-false} 00:22:19.140 }, 00:22:19.140 "method": "bdev_nvme_attach_controller" 00:22:19.140 } 00:22:19.140 EOF 00:22:19.140 )") 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:19.140 { 00:22:19.140 "params": { 00:22:19.140 "name": "Nvme$subsystem", 00:22:19.140 "trtype": "$TEST_TRANSPORT", 00:22:19.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.140 "adrfam": "ipv4", 00:22:19.140 "trsvcid": "$NVMF_PORT", 00:22:19.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.140 "hdgst": ${hdgst:-false}, 00:22:19.140 "ddgst": ${ddgst:-false} 00:22:19.140 }, 00:22:19.140 "method": "bdev_nvme_attach_controller" 00:22:19.140 } 00:22:19.140 EOF 00:22:19.140 )") 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:19.140 { 00:22:19.140 "params": { 00:22:19.140 "name": "Nvme$subsystem", 00:22:19.140 "trtype": "$TEST_TRANSPORT", 00:22:19.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.140 "adrfam": "ipv4", 00:22:19.140 "trsvcid": "$NVMF_PORT", 00:22:19.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.140 "hdgst": ${hdgst:-false}, 00:22:19.140 "ddgst": ${ddgst:-false} 00:22:19.140 }, 00:22:19.140 "method": "bdev_nvme_attach_controller" 00:22:19.140 } 00:22:19.140 EOF 00:22:19.140 )") 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:19.140 [2024-11-26 19:23:42.126962] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:22:19.140 [2024-11-26 19:23:42.127012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3804852 ] 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:19.140 { 00:22:19.140 "params": { 00:22:19.140 "name": "Nvme$subsystem", 00:22:19.140 "trtype": "$TEST_TRANSPORT", 00:22:19.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.140 "adrfam": "ipv4", 00:22:19.140 "trsvcid": "$NVMF_PORT", 00:22:19.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.140 "hdgst": ${hdgst:-false}, 00:22:19.140 "ddgst": ${ddgst:-false} 00:22:19.140 }, 00:22:19.140 "method": "bdev_nvme_attach_controller" 00:22:19.140 } 00:22:19.140 EOF 00:22:19.140 )") 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:19.140 { 00:22:19.140 "params": { 00:22:19.140 "name": "Nvme$subsystem", 00:22:19.140 "trtype": "$TEST_TRANSPORT", 00:22:19.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.140 "adrfam": "ipv4", 00:22:19.140 "trsvcid": "$NVMF_PORT", 00:22:19.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.140 "hdgst": ${hdgst:-false}, 00:22:19.140 "ddgst": ${ddgst:-false} 00:22:19.140 }, 00:22:19.140 "method": "bdev_nvme_attach_controller" 00:22:19.140 } 00:22:19.140 EOF 00:22:19.140 )") 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:19.140 { 00:22:19.140 "params": { 00:22:19.140 "name": "Nvme$subsystem", 00:22:19.140 "trtype": "$TEST_TRANSPORT", 00:22:19.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.140 "adrfam": "ipv4", 00:22:19.140 "trsvcid": "$NVMF_PORT", 00:22:19.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.140 "hdgst": ${hdgst:-false}, 00:22:19.140 "ddgst": ${ddgst:-false} 00:22:19.140 }, 00:22:19.140 "method": "bdev_nvme_attach_controller" 00:22:19.140 } 00:22:19.140 EOF 00:22:19.140 )") 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:19.140 { 00:22:19.140 "params": { 00:22:19.140 "name": "Nvme$subsystem", 00:22:19.140 "trtype": "$TEST_TRANSPORT", 00:22:19.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.140 
"adrfam": "ipv4", 00:22:19.140 "trsvcid": "$NVMF_PORT", 00:22:19.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.140 "hdgst": ${hdgst:-false}, 00:22:19.140 "ddgst": ${ddgst:-false} 00:22:19.140 }, 00:22:19.140 "method": "bdev_nvme_attach_controller" 00:22:19.140 } 00:22:19.140 EOF 00:22:19.140 )") 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:22:19.140 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:19.141 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:19.141 "params": { 00:22:19.141 "name": "Nvme1", 00:22:19.141 "trtype": "tcp", 00:22:19.141 "traddr": "10.0.0.2", 00:22:19.141 "adrfam": "ipv4", 00:22:19.141 "trsvcid": "4420", 00:22:19.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:19.141 "hdgst": false, 00:22:19.141 "ddgst": false 00:22:19.141 }, 00:22:19.141 "method": "bdev_nvme_attach_controller" 00:22:19.141 },{ 00:22:19.141 "params": { 00:22:19.141 "name": "Nvme2", 00:22:19.141 "trtype": "tcp", 00:22:19.141 "traddr": "10.0.0.2", 00:22:19.141 "adrfam": "ipv4", 00:22:19.141 "trsvcid": "4420", 00:22:19.141 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:19.141 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:19.141 "hdgst": false, 00:22:19.141 "ddgst": false 00:22:19.141 }, 00:22:19.141 "method": "bdev_nvme_attach_controller" 00:22:19.141 },{ 00:22:19.141 "params": { 00:22:19.141 "name": "Nvme3", 00:22:19.141 "trtype": "tcp", 00:22:19.141 "traddr": "10.0.0.2", 00:22:19.141 "adrfam": "ipv4", 00:22:19.141 "trsvcid": "4420", 00:22:19.141 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:19.141 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:19.141 "hdgst": false, 00:22:19.141 "ddgst": false 00:22:19.141 }, 00:22:19.141 "method": "bdev_nvme_attach_controller" 00:22:19.141 },{ 00:22:19.141 "params": { 00:22:19.141 "name": "Nvme4", 00:22:19.141 "trtype": "tcp", 00:22:19.141 "traddr": "10.0.0.2", 00:22:19.141 "adrfam": "ipv4", 00:22:19.141 "trsvcid": "4420", 00:22:19.141 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:19.141 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:19.141 "hdgst": false, 00:22:19.141 "ddgst": false 00:22:19.141 }, 00:22:19.141 "method": "bdev_nvme_attach_controller" 00:22:19.141 },{ 00:22:19.141 "params": { 00:22:19.141 "name": "Nvme5", 00:22:19.141 "trtype": "tcp", 00:22:19.141 "traddr": "10.0.0.2", 00:22:19.141 "adrfam": "ipv4", 00:22:19.141 "trsvcid": "4420", 00:22:19.141 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:19.141 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:19.141 "hdgst": false, 00:22:19.141 "ddgst": false 00:22:19.141 }, 00:22:19.141 "method": "bdev_nvme_attach_controller" 00:22:19.141 },{ 00:22:19.141 "params": { 00:22:19.141 "name": "Nvme6", 00:22:19.141 "trtype": "tcp", 00:22:19.141 "traddr": "10.0.0.2", 00:22:19.141 "adrfam": "ipv4", 00:22:19.141 "trsvcid": "4420", 00:22:19.141 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:19.141 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:19.141 "hdgst": false, 00:22:19.141 "ddgst": false 00:22:19.141 }, 00:22:19.141 "method": "bdev_nvme_attach_controller" 00:22:19.141 },{ 00:22:19.141 "params": { 00:22:19.141 "name": "Nvme7", 00:22:19.141 "trtype": "tcp", 00:22:19.141 "traddr": "10.0.0.2", 
00:22:19.141 "adrfam": "ipv4", 00:22:19.141 "trsvcid": "4420", 00:22:19.141 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:19.141 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:19.141 "hdgst": false, 00:22:19.141 "ddgst": false 00:22:19.141 }, 00:22:19.141 "method": "bdev_nvme_attach_controller" 00:22:19.141 },{ 00:22:19.141 "params": { 00:22:19.141 "name": "Nvme8", 00:22:19.141 "trtype": "tcp", 00:22:19.141 "traddr": "10.0.0.2", 00:22:19.141 "adrfam": "ipv4", 00:22:19.141 "trsvcid": "4420", 00:22:19.141 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:19.141 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:19.141 "hdgst": false, 00:22:19.141 "ddgst": false 00:22:19.141 }, 00:22:19.141 "method": "bdev_nvme_attach_controller" 00:22:19.141 },{ 00:22:19.141 "params": { 00:22:19.141 "name": "Nvme9", 00:22:19.141 "trtype": "tcp", 00:22:19.141 "traddr": "10.0.0.2", 00:22:19.141 "adrfam": "ipv4", 00:22:19.141 "trsvcid": "4420", 00:22:19.141 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:19.141 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:19.141 "hdgst": false, 00:22:19.141 "ddgst": false 00:22:19.141 }, 00:22:19.141 "method": "bdev_nvme_attach_controller" 00:22:19.141 },{ 00:22:19.141 "params": { 00:22:19.141 "name": "Nvme10", 00:22:19.141 "trtype": "tcp", 00:22:19.141 "traddr": "10.0.0.2", 00:22:19.141 "adrfam": "ipv4", 00:22:19.141 "trsvcid": "4420", 00:22:19.141 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:19.141 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:19.141 "hdgst": false, 00:22:19.141 "ddgst": false 00:22:19.141 }, 00:22:19.141 "method": "bdev_nvme_attach_controller" 00:22:19.141 }' 00:22:19.141 [2024-11-26 19:23:42.202331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.141 [2024-11-26 19:23:42.242946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.049 Running I/O for 10 seconds... 
00:22:21.049 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:21.049 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:21.049 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:21.049 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.049 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:21.307 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.307 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:21.307 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:21.307 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:21.307 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:21.307 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:21.307 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:21.307 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:21.307 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:21.307 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:21.307 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:21.307 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.307 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:21.307 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.307 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:21.307 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:21.307 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3804574 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3804574 ']' 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3804574 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3804574 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3804574' 00:22:21.573 killing process with pid 3804574 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3804574 00:22:21.573 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3804574 00:22:21.573 [2024-11-26 19:23:44.653078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61850 is same with the state(6) to be set 00:22:21.573 [2024-11-26 19:23:44.653152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61850 is same with the state(6) to be set 00:22:21.573 [2024-11-26 19:23:44.653161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61850 is same with the state(6) to be set 00:22:21.573 [2024-11-26 19:23:44.653168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61850 is same with the state(6) to be set 00:22:21.573 [2024-11-26 19:23:44.653174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61850 is same with the state(6) to be set 00:22:21.573 [2024-11-26 19:23:44.653181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xa61850 is same with the state(6) to be set 
[... the same tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* message, 'The recv state of tqpair=... is same with the state(6) to be set', repeats here many times between 19:23:44.653 and 19:23:44.657, identical except for the timestamp, for tqpair addresses 0xa61850, 0xa64400, 0xa61d20 and 0xa621f0 ...]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa621f0 is same with the state(6) to be set 00:22:21.575 [2024-11-26 19:23:44.657914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa621f0 is same with the state(6) to be set 00:22:21.575 [2024-11-26 19:23:44.657921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa621f0 is same with the state(6) to be set 00:22:21.575 [2024-11-26 19:23:44.657926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa621f0 is same with the state(6) to be set 00:22:21.575 [2024-11-26 19:23:44.657932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa621f0 is same with the state(6) to be set 00:22:21.575 [2024-11-26 19:23:44.657938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa621f0 is same with the state(6) to be set 00:22:21.575 [2024-11-26 19:23:44.657945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa621f0 is same with the state(6) to be set 00:22:21.575 [2024-11-26 19:23:44.657951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa621f0 is same with the state(6) to be set 00:22:21.575 [2024-11-26 19:23:44.657957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa621f0 is same with the state(6) to be set 00:22:21.575 [2024-11-26 19:23:44.657963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa621f0 is same with the state(6) to be set 00:22:21.575 [2024-11-26 19:23:44.658896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.575 [2024-11-26 19:23:44.658919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.575 [2024-11-26 19:23:44.658926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.575 [2024-11-26 19:23:44.658932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.575 [2024-11-26 19:23:44.658943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.575 [2024-11-26 19:23:44.658949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.658956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.658962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.658968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.658975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.658981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.658988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 
00:22:21.576 [2024-11-26 19:23:44.658994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is 
same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa626e0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.659981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660077] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.576 [2024-11-26 19:23:44.660176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 
00:22:21.577 [2024-11-26 19:23:44.660224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is same with the state(6) to be set 00:22:21.577 [2024-11-26 19:23:44.660348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62bb0 is 
00:22:21.577 [2024-11-26 19:23:44.660410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.577 [2024-11-26 19:23:44.660442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE/completion pair repeats for cid:1 through cid:63 (lba 16512 through 24448 in steps of 128, len:128), every command completed as ABORTED - SQ DELETION, last completion at 2024-11-26 19:23:44.661408; from 19:23:44.661170 onward these notices appear interleaved in the console output with the recv-state errors shown next ...]
00:22:21.578 [2024-11-26 19:23:44.661170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa630a0 is same with the state(6) to be set
[... the same message for tqpair=0xa630a0 repeats through 2024-11-26 19:23:44.661520 ...]
00:22:21.579 [2024-11-26 19:23:44.661908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.579 [2024-11-26 19:23:44.661931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.579 [2024-11-26 19:23:44.661938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.579 [2024-11-26 19:23:44.661945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.579 [2024-11-26 19:23:44.661952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.579 [2024-11-26 19:23:44.661958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.579 [2024-11-26 19:23:44.661965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.579 [2024-11-26 19:23:44.661972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.579 [2024-11-26 19:23:44.661978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1d30 is same with the state(6) to be set 00:22:21.579 [2024-11-26 19:23:44.662004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.579 [2024-11-26 19:23:44.662012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.579 [2024-11-26 19:23:44.662019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.579 [2024-11-26 19:23:44.662025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.579 [2024-11-26 19:23:44.662033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.579 [2024-11-26 19:23:44.662042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.579 [2024-11-26 19:23:44.662049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.579 [2024-11-26 19:23:44.662055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.579 [2024-11-26 19:23:44.662062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271d4c0 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2712c60 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x275a8b0 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2206610 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e6200 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662421]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1300 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.580 [2024-11-26 19:23:44.662570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.580 [2024-11-26 19:23:44.662573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.580 [2024-11-26 19:23:44.662578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.581 [2024-11-26 19:23:44.662585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f21c0 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662737] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.662783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63570 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 
00:22:21.581 [2024-11-26 19:23:44.663592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.663698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.581 [2024-11-26 19:23:44.664063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:21.581 [2024-11-26 19:23:44.664102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e6200 (9): Bad file descriptor 00:22:21.581 [2024-11-26 19:23:44.664213] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:21.581 [2024-11-26 19:23:44.664539] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:21.581 [2024-11-26 19:23:44.664603] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: 
*ERROR*: Unexpected PDU type 0x00 00:22:21.581 [2024-11-26 19:23:44.664854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.581 [2024-11-26 19:23:44.664865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.581 [2024-11-26 19:23:44.664877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.581 [2024-11-26 19:23:44.664884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.581 [2024-11-26 19:23:44.664899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.581 [2024-11-26 19:23:44.664905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.581 [2024-11-26 19:23:44.664913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.581 [2024-11-26 19:23:44.664920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.581 [2024-11-26 19:23:44.664928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.581 [2024-11-26 19:23:44.664934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.581 [2024-11-26 19:23:44.664942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.664948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.664956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.664963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.664971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.664977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.664985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.664991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:21.582 [2024-11-26 19:23:44.665016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 
19:23:44.665165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665314] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.665430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.582 [2024-11-26 19:23:44.665467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.582 [2024-11-26 19:23:44.678117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.582 [2024-11-26 19:23:44.678129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.582 [2024-11-26 19:23:44.678138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.582 [2024-11-26 19:23:44.678147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.582 [2024-11-26 19:23:44.678156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the 
state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.678366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63a40 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.679084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.679101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.679108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.679113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.679119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.679124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.679131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.679136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.679142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.679148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.679154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.679159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.679165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.679171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.679177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.583 [2024-11-26 19:23:44.679183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.851 [2024-11-26 19:23:44.679189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.851 [2024-11-26 19:23:44.679195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.851 [2024-11-26 19:23:44.679201] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.851 [2024-11-26 19:23:44.679207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 
00:22:21.852 [2024-11-26 19:23:44.679335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is 
same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.679464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63f10 is same with the state(6) to be set 00:22:21.852 [2024-11-26 19:23:44.680511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.852 [2024-11-26 19:23:44.680524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.852 [2024-11-26 19:23:44.680537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.852 [2024-11-26 19:23:44.680545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.852 [2024-11-26 19:23:44.680556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.852 [2024-11-26 19:23:44.680565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.852 [2024-11-26 19:23:44.680575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.852 [2024-11-26 19:23:44.680584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.852 [2024-11-26 19:23:44.680595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.852 [2024-11-26 19:23:44.680603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.852 [2024-11-26 19:23:44.680615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.852 [2024-11-26 19:23:44.680623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.852 [2024-11-26 19:23:44.680634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.852 [2024-11-26 19:23:44.680643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.852 [2024-11-26 19:23:44.680654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.852 [2024-11-26 19:23:44.680662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.852 [2024-11-26 19:23:44.680675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.852 [2024-11-26 19:23:44.680684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.852 [2024-11-26 19:23:44.680698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:21.852 [2024-11-26 19:23:44.680706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.852 [2024-11-26 19:23:44.680717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.852 [2024-11-26 19:23:44.680725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.852 [2024-11-26 19:23:44.680736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.852 [2024-11-26 19:23:44.680745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.852 [2024-11-26 19:23:44.680755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.852 [2024-11-26 19:23:44.680765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.852 [2024-11-26 19:23:44.680776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.852 [2024-11-26 19:23:44.680784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.852 [2024-11-26 19:23:44.680795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.852 [2024-11-26 19:23:44.680803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.852 [2024-11-26 19:23:44.680813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.852 [2024-11-26 19:23:44.680822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.852 [2024-11-26 19:23:44.680832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.852 [2024-11-26 19:23:44.680841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.852 [2024-11-26 19:23:44.680852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.852 [2024-11-26 19:23:44.680860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.852 [2024-11-26 19:23:44.680870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.852 [2024-11-26 19:23:44.680879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.680889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 
19:23:44.680898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.680908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 19:23:44.680917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.680928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 19:23:44.680938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.680948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 19:23:44.680957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.680967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 19:23:44.680976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.680986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 19:23:44.680995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.681006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 19:23:44.681014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.681024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 19:23:44.681033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.681043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f8200 is same with the state(6) to be set 00:22:21.853 [2024-11-26 19:23:44.681162] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:21.853 [2024-11-26 19:23:44.681688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.853 [2024-11-26 19:23:44.681711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e6200 with addr=10.0.0.2, port=4420 00:22:21.853 [2024-11-26 19:23:44.681721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e6200 is same with the state(6) to be set 00:22:21.853 [2024-11-26 19:23:44.681775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.853 [2024-11-26 
19:23:44.681787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.681798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.853 [2024-11-26 19:23:44.681806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.681816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.853 [2024-11-26 19:23:44.681824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.681833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.853 [2024-11-26 19:23:44.681842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.681851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2716120 is same with the state(6) to be set 00:22:21.853 [2024-11-26 19:23:44.681882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.853 [2024-11-26 19:23:44.681896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.681906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.853 [2024-11-26 19:23:44.681914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.681923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.853 [2024-11-26 19:23:44.681932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.681941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.853 [2024-11-26 19:23:44.681950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.681958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x276a940 is same with the state(6) to be set 00:22:21.853 [2024-11-26 19:23:44.681980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f1d30 (9): Bad file descriptor 00:22:21.853 [2024-11-26 19:23:44.681996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x271d4c0 (9): Bad file descriptor 00:22:21.853 [2024-11-26 19:23:44.682016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2712c60 (9): Bad file descriptor 00:22:21.853 [2024-11-26 19:23:44.682034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x275a8b0 (9): Bad file 
descriptor 00:22:21.853 [2024-11-26 19:23:44.682055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2206610 (9): Bad file descriptor 00:22:21.853 [2024-11-26 19:23:44.682077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f1300 (9): Bad file descriptor 00:22:21.853 [2024-11-26 19:23:44.682095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f21c0 (9): Bad file descriptor 00:22:21.853 [2024-11-26 19:23:44.682113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e6200 (9): Bad file descriptor 00:22:21.853 [2024-11-26 19:23:44.683772] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:21.853 [2024-11-26 19:23:44.684057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:21.853 [2024-11-26 19:23:44.684209] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:21.853 [2024-11-26 19:23:44.684261] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:21.853 [2024-11-26 19:23:44.685356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.853 [2024-11-26 19:23:44.685382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f1d30 with addr=10.0.0.2, port=4420 00:22:21.853 [2024-11-26 19:23:44.685396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1d30 is same with the state(6) to be set 00:22:21.853 [2024-11-26 19:23:44.685410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:21.853 [2024-11-26 19:23:44.685421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:21.853 [2024-11-26 19:23:44.685435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:21.853 [2024-11-26 19:23:44.685449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
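The repeated "connect() failed, errno = 111" entries above are ECONNREFUSED: while the target side of the shutdown test has torn down its listener, the host keeps trying to re-establish NVMe/TCP queue pairs to 10.0.0.2:4420, each connect attempt is refused, and the subsequent controller reset is reported as failed. A minimal standalone sketch of how that errno value surfaces from a plain POSIX connect() (illustrative only; plain sockets are assumed here, this is not SPDK's sock layer):

/* connect_refused.c - illustrative sketch, not SPDK code.
 * Attempts a TCP connect to an address/port with no listener; when the host
 * is reachable but nothing listens on the port, the call typically fails with
 * ECONNREFUSED, which is 111 on Linux - the value seen in the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port used in the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address used in the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Prints e.g. "connect() failed, errno = 111 (Connection refused)". */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}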
00:22:21.853 [2024-11-26 19:23:44.685891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 19:23:44.685914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.685933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 19:23:44.685946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.685961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 19:23:44.685973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.685988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 19:23:44.685999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.686014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 19:23:44.686027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.686042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 19:23:44.686053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.686068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 19:23:44.686080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.686095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 19:23:44.686107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.686121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 19:23:44.686133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.686148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 19:23:44.686161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 
19:23:44.686175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 19:23:44.686187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.853 [2024-11-26 19:23:44.686202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.853 [2024-11-26 19:23:44.686214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686443] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.686986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.686998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.687013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.687025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.687040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.687051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.687066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.687078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.687093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.687104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.687118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.687130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.687145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.687157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.687171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.687183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.687197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.687209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.687224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.687235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.687250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.854 [2024-11-26 19:23:44.687262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.854 [2024-11-26 19:23:44.687279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.687291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.687305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.687317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.687331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.687343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.687358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.687370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.687384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.687395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.687410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.687422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.687437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.687449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.687463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.687475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.687488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f8970 is same with the state(6) to be set 00:22:21.855 [2024-11-26 19:23:44.687611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.687627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.687645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.687657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.687676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x36408f0 is same with the state(6) to be set 00:22:21.855 [2024-11-26 19:23:44.687836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f1d30 (9): Bad file descriptor 00:22:21.855 [2024-11-26 19:23:44.690824] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:21.855 [2024-11-26 19:23:44.690862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:21.855 [2024-11-26 19:23:44.690885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:21.855 [2024-11-26 19:23:44.690911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2716120 (9): Bad file descriptor 00:22:21.855 [2024-11-26 19:23:44.690939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:21.855 [2024-11-26 19:23:44.690951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:21.855 [2024-11-26 19:23:44.690964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:21.855 [2024-11-26 19:23:44.690976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:22:21.855 [2024-11-26 19:23:44.691322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.855 [2024-11-26 19:23:44.691346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2712c60 with addr=10.0.0.2, port=4420 00:22:21.855 [2024-11-26 19:23:44.691359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2712c60 is same with the state(6) to be set 00:22:21.855 [2024-11-26 19:23:44.692452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.855 [2024-11-26 19:23:44.692475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2716120 with addr=10.0.0.2, port=4420 00:22:21.855 [2024-11-26 19:23:44.692488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2716120 is same with the state(6) to be set 00:22:21.855 [2024-11-26 19:23:44.692504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2712c60 (9): Bad file descriptor 00:22:21.855 [2024-11-26 19:23:44.692524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x276a940 (9): Bad file descriptor 00:22:21.855 [2024-11-26 19:23:44.692677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2716120 (9): Bad file descriptor 00:22:21.855 [2024-11-26 19:23:44.692691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:21.855 [2024-11-26 19:23:44.692699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:21.855 [2024-11-26 19:23:44.692708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:21.855 [2024-11-26 19:23:44.692716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
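The "(00/08)" printed with every ABORTED - SQ DELETION notice above is the NVMe completion status shown as (status code type / status code): type 0x0 is Generic Command Status and code 0x08 is "Command Aborted due to SQ Deletion", i.e. the queued reads were dropped when their submission queue was deleted during controller teardown. A small illustrative decoder for that pair (not SPDK code; the mapping follows the NVMe base specification):

/* status_decode.c - illustrative sketch, not SPDK code.
 * Maps the (SCT/SC) pair printed in the log to a human-readable string for a
 * few Generic Command Status codes relevant to this test. */
#include <stdio.h>

static const char *nvme_status_str(unsigned sct, unsigned sc)
{
    if (sct == 0x0) {               /* Generic Command Status */
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x07: return "ABORTED - COMMAND ABORT REQUESTED";
        case 0x08: return "ABORTED - SQ DELETION";
        default:   return "GENERIC - other status code";
        }
    }
    return "other status code type";
}

int main(void)
{
    /* The pair that appears throughout the completions above. */
    printf("(00/08) -> %s\n", nvme_status_str(0x0, 0x08));
    return 0;
}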
00:22:21.855 [2024-11-26 19:23:44.692760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.692771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.692784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.692792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.692803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.692811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.692822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.692830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.692840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.692852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.692863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.692870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.692880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.692888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.692898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.692907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.692916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.692925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.692935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.692942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 
19:23:44.692953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.692961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.692971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.692979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.692989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.692997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.693007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.693015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.693025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.693033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.693043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.693051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.855 [2024-11-26 19:23:44.693061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.855 [2024-11-26 19:23:44.693069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693135] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.856 [2024-11-26 19:23:44.693657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.856 [2024-11-26 19:23:44.693674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.693683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.693693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.693701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.693711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.693719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.693730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.693737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.693747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.693755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.693766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.693776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.693785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.693793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.693803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.693811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.693821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.693829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.693839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.693847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.693856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.693864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.693874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.693882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.693892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.693900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.693910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.693918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.693928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.693936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.693944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f61a0 is same with the state(6) to be set 00:22:21.857 [2024-11-26 19:23:44.695154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695269] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.857 [2024-11-26 19:23:44.695597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.857 [2024-11-26 19:23:44.695607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.695985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.695993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.696003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.696011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:21.858 [2024-11-26 19:23:44.696021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.696029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.696039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.696047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.696057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.696066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.696075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.696083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.696093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.696102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.696111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.696120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.696129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.696138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.696149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.696157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.696167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.696175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.696185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.696193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 
19:23:44.696202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.696211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.696221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.696229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.696239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.696248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.696258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.696266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.696276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.696284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.696295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.696303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.696313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.858 [2024-11-26 19:23:44.696321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.858 [2024-11-26 19:23:44.696331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.696339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.696347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f6350 is same with the state(6) to be set 00:22:21.859 [2024-11-26 19:23:44.697548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697585] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.697988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.697998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.698007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.698016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.698024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.698036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.698044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.698054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.698062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.698072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.698080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.698090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.698098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.698108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.698117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.698127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.698135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.698145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.698153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.859 [2024-11-26 19:23:44.698162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.859 [2024-11-26 19:23:44.698170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.698720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.698729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f7660 is same with the state(6) to be set 00:22:21.860 [2024-11-26 19:23:44.699929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.699944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.699957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.699965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.699975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.699983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.699993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.700001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.700011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.700019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.700029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.700037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.700047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.700055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.700066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.700073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.860 [2024-11-26 19:23:44.700083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.860 [2024-11-26 19:23:44.700091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.861 [2024-11-26 19:23:44.700598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.861 [2024-11-26 19:23:44.700612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:21.861 [2024-11-26 19:23:44.700620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.700638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.700657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.700679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.700697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.700715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.700735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.700752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.700770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.700788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 
19:23:44.700806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.700824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.700844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.700862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.700880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.700898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.700916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.700934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.700952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.700970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.700988] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.700998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.701006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.701016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.701024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.701034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.701041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.701051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.701060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.701072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.701080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.701090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.701098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.701107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f9c80 is same with the state(6) to be set 00:22:21.862 [2024-11-26 19:23:44.702332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.702348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.702360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.702369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.702379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.702388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.702398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.702406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.702416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.702424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.702434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.702442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.702452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.702460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.702470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.702478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.702488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.702496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.702506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.702514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.702528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.702536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.702546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.702554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.702564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.702572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.702582] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.702590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.702600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.702608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.862 [2024-11-26 19:23:44.702619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.862 [2024-11-26 19:23:44.702639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.702990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.702998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.703005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.703013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.703019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.703027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.703033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.703042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.703048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.703057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.703063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.703071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.703078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.703086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.703092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.703100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.703109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.703117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.703124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.703132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.703138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.703146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.703153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.703161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.863 [2024-11-26 19:23:44.703167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.863 [2024-11-26 19:23:44.703175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.703181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.703189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:21.864 [2024-11-26 19:23:44.703196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.703203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.703210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.703218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.703224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.703232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.703238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.703246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.703253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.703260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.703267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.703275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.703282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.703291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.703297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.703305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.703311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.703320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.703326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.703334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 
19:23:44.703341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.703348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x3b0c2e0 is same with the state(6) to be set 00:22:21.864 [2024-11-26 19:23:44.704285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:21.864 [2024-11-26 19:23:44.704304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:21.864 [2024-11-26 19:23:44.704315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:21.864 [2024-11-26 19:23:44.704324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:21.864 [2024-11-26 19:23:44.704358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:21.864 [2024-11-26 19:23:44.704365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:21.864 [2024-11-26 19:23:44.704372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:21.864 [2024-11-26 19:23:44.704380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:21.864 [2024-11-26 19:23:44.704421] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:22:21.864 [2024-11-26 19:23:44.704432] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
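
Taken together, the records above show the usual teardown pattern when the TCP connections to the target drop: every command still outstanding on a queue pair is completed with ABORTED - SQ DELETION as the queue is destroyed, bdev_nvme then logs "resetting controller" for each affected subsystem, a reset that cannot start because another failover is running is reported as "Unable to perform failover, already in progress", and for cnode8 the reconnect attempt failed outright ("controller reinitialization failed", "in failed state", "Resetting controller failed"). The short script below is a hypothetical post-processing aid, not part of the SPDK test suite: it tallies the SQ-deletion aborts per queue ID and the reset notices per subsystem NQN from a saved copy of this console output (the default file name is made up).

import re
import sys
from collections import Counter

# Hypothetical helper, not part of the SPDK test suite: count the
# "ABORTED - SQ DELETION" completions per queue ID and the "resetting
# controller" notices per subsystem NQN in a saved copy of this console log.
ABORT_RE = re.compile(r"ABORTED - SQ DELETION \(00/08\) qid:(\d+)")
RESET_RE = re.compile(r"\[(nqn\.[^,\]]+), \d+\] resetting controller")

def summarize(path):
    aborts, resets = Counter(), Counter()
    with open(path, errors="replace") as log:
        for line in log:
            aborts.update(int(q) for q in ABORT_RE.findall(line))
            resets.update(RESET_RE.findall(line))
    for qid, n in sorted(aborts.items()):
        print("qid %d: %d commands aborted by SQ deletion" % (qid, n))
    for nqn, n in sorted(resets.items()):
        print("%s: %d reset notices" % (nqn, n))

if __name__ == "__main__":
    # Both the script invocation and the default file name are assumptions.
    summarize(sys.argv[1] if len(sys.argv) > 1 else "console.log")

Against the excerpt above, every abort lands on qid 1, the single I/O queue visible in these prints.
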
00:22:21.864 [2024-11-26 19:23:44.704492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:21.864 [2024-11-26 19:23:44.704503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:21.864 [2024-11-26 19:23:44.704762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.864 [2024-11-26 19:23:44.704778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e6200 with addr=10.0.0.2, port=4420 00:22:21.864 [2024-11-26 19:23:44.704786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e6200 is same with the state(6) to be set 00:22:21.864 [2024-11-26 19:23:44.705028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.864 [2024-11-26 19:23:44.705038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f21c0 with addr=10.0.0.2, port=4420 00:22:21.864 [2024-11-26 19:23:44.705045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f21c0 is same with the state(6) to be set 00:22:21.864 [2024-11-26 19:23:44.705210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.864 [2024-11-26 19:23:44.705226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f1300 with addr=10.0.0.2, port=4420 00:22:21.864 [2024-11-26 19:23:44.705233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1300 is same with the state(6) to be set 00:22:21.864 [2024-11-26 19:23:44.705466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.864 [2024-11-26 19:23:44.705477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x271d4c0 with addr=10.0.0.2, port=4420 00:22:21.864 [2024-11-26 19:23:44.705483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271d4c0 is same with the state(6) to be set 00:22:21.864 [2024-11-26 19:23:44.706397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.706422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.706438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.706453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.706467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.706482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.706496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.706510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.706524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.706538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.706553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.706570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.706585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.706599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.706613] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.706628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.706642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.706657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.706675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.864 [2024-11-26 19:23:44.706690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.864 [2024-11-26 19:23:44.706696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.706988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.706996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.707002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.707010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.707016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.707024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.707030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.707038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.707045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.707052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.707059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.707067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.707073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.707081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.707088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.707096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.707102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.707112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.707118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.707126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.707132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.707140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.707146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.707154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.707160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.707168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.707174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.707182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.707189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.707197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:21.865 [2024-11-26 19:23:44.707203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.707211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.707217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.707225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.707231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.707239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.707246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.707253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.865 [2024-11-26 19:23:44.707260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.865 [2024-11-26 19:23:44.707267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.866 [2024-11-26 19:23:44.707274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.866 [2024-11-26 19:23:44.707281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.866 [2024-11-26 19:23:44.707289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.866 [2024-11-26 19:23:44.707296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.866 [2024-11-26 19:23:44.707303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.866 [2024-11-26 19:23:44.707311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.866 [2024-11-26 19:23:44.707318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.866 [2024-11-26 19:23:44.707325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.866 [2024-11-26 19:23:44.707332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.866 [2024-11-26 19:23:44.707339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x38be8e0 is same with the state(6) to be set 00:22:21.866 [2024-11-26 
19:23:44.708514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:21.866 [2024-11-26 19:23:44.708531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:22:21.866 [2024-11-26 19:23:44.708541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:22:21.866 task offset: 16384 on job bdev=Nvme2n1 fails
00:22:21.866
00:22:21.866 Latency(us)
00:22:21.866 [2024-11-26T18:23:44.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:21.866 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.866 Job: Nvme1n1 ended in about 0.61 seconds with error
00:22:21.866 Verification LBA range: start 0x0 length 0x400
00:22:21.866 Nvme1n1 : 0.61 211.43 13.21 105.72 0.00 198892.66 18474.91 212711.13
00:22:21.866 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.866 Job: Nvme2n1 ended in about 0.57 seconds with error
00:22:21.866 Verification LBA range: start 0x0 length 0x400
00:22:21.866 Nvme2n1 : 0.57 222.96 13.94 111.48 0.00 183247.40 3698.10 210713.84
00:22:21.866 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.866 Job: Nvme3n1 ended in about 0.59 seconds with error
00:22:21.866 Verification LBA range: start 0x0 length 0x400
00:22:21.866 Nvme3n1 : 0.59 215.61 13.48 107.81 0.00 184583.48 14917.24 210713.84
00:22:21.866 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.866 Job: Nvme4n1 ended in about 0.61 seconds with error
00:22:21.866 Verification LBA range: start 0x0 length 0x400
00:22:21.866 Nvme4n1 : 0.61 210.60 13.16 105.30 0.00 184142.67 15354.15 200727.41
00:22:21.866 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.866 Job: Nvme5n1 ended in about 0.61 seconds with error
00:22:21.866 Verification LBA range: start 0x0 length 0x400
00:22:21.866 Nvme5n1 : 0.61 209.78 13.11 104.89 0.00 179790.34 31457.28 210713.84
00:22:21.866 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.866 Job: Nvme6n1 ended in about 0.60 seconds with error
00:22:21.866 Verification LBA range: start 0x0 length 0x400
00:22:21.866 Nvme6n1 : 0.60 221.86 13.87 98.42 0.00 170419.93 24341.94 182751.82
00:22:21.866 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.866 Job: Nvme7n1 ended in about 0.61 seconds with error
00:22:21.866 Verification LBA range: start 0x0 length 0x400
00:22:21.866 Nvme7n1 : 0.61 104.48 6.53 104.48 0.00 255411.93 26713.72 230686.72
00:22:21.866 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.866 Job: Nvme8n1 ended in about 0.60 seconds with error
00:22:21.866 Verification LBA range: start 0x0 length 0x400
00:22:21.866 Nvme8n1 : 0.60 209.75 13.11 3.33 0.00 239683.78 14605.17 213709.78
00:22:21.866 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.866 Job: Nvme9n1 ended in about 0.62 seconds with error
00:22:21.866 Verification LBA range: start 0x0 length 0x400
00:22:21.866 Nvme9n1 : 0.62 103.44 6.47 103.44 0.00 243269.24 31956.60 217704.35
00:22:21.866 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.866 Job: Nvme10n1 ended in about 0.61 seconds with error
00:22:21.866 Verification LBA range: start 0x0 length 0x400
00:22:21.866 Nvme10n1 : 0.61 104.11 6.51 104.11 0.00 233771.40 16976.94 232684.01
00:22:21.866 [2024-11-26T18:23:44.980Z] ===================================================================================================================
00:22:21.866 [2024-11-26T18:23:44.980Z] Total : 1814.04 113.38 948.98 0.00 201827.01 3698.10 232684.01
00:22:21.866 [2024-11-26 19:23:44.738900] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:21.866 [2024-11-26 19:23:44.738950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:21.866 [2024-11-26 19:23:44.739267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.866 [2024-11-26 19:23:44.739285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2206610 with addr=10.0.0.2, port=4420
00:22:21.866 [2024-11-26 19:23:44.739295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2206610 is same with the state(6) to be set
00:22:21.866 [2024-11-26 19:23:44.739463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.866 [2024-11-26 19:23:44.739473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x275a8b0 with addr=10.0.0.2, port=4420
00:22:21.866 [2024-11-26 19:23:44.739480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x275a8b0 is same with the state(6) to be set
00:22:21.866 [2024-11-26 19:23:44.739492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e6200 (9): Bad file descriptor
00:22:21.866 [2024-11-26 19:23:44.739505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f21c0 (9): Bad file descriptor
00:22:21.866 [2024-11-26 19:23:44.739514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f1300 (9): Bad file descriptor
00:22:21.866 [2024-11-26 19:23:44.739522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x271d4c0 (9): Bad file descriptor
00:22:21.866 [2024-11-26 19:23:44.739901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.866 [2024-11-26 19:23:44.739917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f1d30 with addr=10.0.0.2, port=4420
00:22:21.866 [2024-11-26 19:23:44.739925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1d30 is same with the state(6) to be set
00:22:21.866 [2024-11-26 19:23:44.740048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.866 [2024-11-26 19:23:44.740057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2712c60 with addr=10.0.0.2, port=4420
00:22:21.866 [2024-11-26 19:23:44.740065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2712c60 is same with the state(6) to be set
00:22:21.866 [2024-11-26 19:23:44.740216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.866 [2024-11-26 19:23:44.740226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2716120 with addr=10.0.0.2, port=4420
00:22:21.866 [2024-11-26 19:23:44.740233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2716120 is same with the state(6) to be set
00:22:21.866 [2024-11-26 19:23:44.740422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.866 [2024-11-26
19:23:44.740432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x276a940 with addr=10.0.0.2, port=4420 00:22:21.866 [2024-11-26 19:23:44.740443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x276a940 is same with the state(6) to be set 00:22:21.866 [2024-11-26 19:23:44.740452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2206610 (9): Bad file descriptor 00:22:21.866 [2024-11-26 19:23:44.740461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x275a8b0 (9): Bad file descriptor 00:22:21.866 [2024-11-26 19:23:44.740469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:21.866 [2024-11-26 19:23:44.740475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:21.866 [2024-11-26 19:23:44.740484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:21.866 [2024-11-26 19:23:44.740493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:21.866 [2024-11-26 19:23:44.740500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:21.866 [2024-11-26 19:23:44.740506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:21.866 [2024-11-26 19:23:44.740512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:21.866 [2024-11-26 19:23:44.740517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:21.866 [2024-11-26 19:23:44.740523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:21.866 [2024-11-26 19:23:44.740529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:21.866 [2024-11-26 19:23:44.740535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:21.866 [2024-11-26 19:23:44.740541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:21.866 [2024-11-26 19:23:44.740548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:21.866 [2024-11-26 19:23:44.740553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:21.866 [2024-11-26 19:23:44.740559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:21.866 [2024-11-26 19:23:44.740565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:21.866 [2024-11-26 19:23:44.740610] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:22:21.866 [2024-11-26 19:23:44.740621] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:22:21.866 [2024-11-26 19:23:44.740940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f1d30 (9): Bad file descriptor 00:22:21.866 [2024-11-26 19:23:44.740952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2712c60 (9): Bad file descriptor 00:22:21.866 [2024-11-26 19:23:44.740961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2716120 (9): Bad file descriptor 00:22:21.866 [2024-11-26 19:23:44.740969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x276a940 (9): Bad file descriptor 00:22:21.866 [2024-11-26 19:23:44.740977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:21.866 [2024-11-26 19:23:44.740983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:21.866 [2024-11-26 19:23:44.740990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:21.866 [2024-11-26 19:23:44.741000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:21.866 [2024-11-26 19:23:44.741007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:21.866 [2024-11-26 19:23:44.741013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:21.866 [2024-11-26 19:23:44.741019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:21.866 [2024-11-26 19:23:44.741025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:22:21.866 [2024-11-26 19:23:44.741058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:21.867 [2024-11-26 19:23:44.741068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:21.867 [2024-11-26 19:23:44.741076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:21.867 [2024-11-26 19:23:44.741084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:21.867 [2024-11-26 19:23:44.741113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:21.867 [2024-11-26 19:23:44.741119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:21.867 [2024-11-26 19:23:44.741125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:21.867 [2024-11-26 19:23:44.741131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:21.867 [2024-11-26 19:23:44.741138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:21.867 [2024-11-26 19:23:44.741143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:21.867 [2024-11-26 19:23:44.741149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:22:21.867 [2024-11-26 19:23:44.741155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:21.867 [2024-11-26 19:23:44.741162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:21.867 [2024-11-26 19:23:44.741167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:21.867 [2024-11-26 19:23:44.741173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:21.867 [2024-11-26 19:23:44.741179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:21.867 [2024-11-26 19:23:44.741185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:21.867 [2024-11-26 19:23:44.741190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:21.867 [2024-11-26 19:23:44.741196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:21.867 [2024-11-26 19:23:44.741202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:22:21.867 [2024-11-26 19:23:44.741454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.867 [2024-11-26 19:23:44.741466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x271d4c0 with addr=10.0.0.2, port=4420 00:22:21.867 [2024-11-26 19:23:44.741473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271d4c0 is same with the state(6) to be set 00:22:21.867 [2024-11-26 19:23:44.741691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.867 [2024-11-26 19:23:44.741701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f1300 with addr=10.0.0.2, port=4420 00:22:21.867 [2024-11-26 19:23:44.741711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1300 is same with the state(6) to be set 00:22:21.867 [2024-11-26 19:23:44.741880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.867 [2024-11-26 19:23:44.741891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f21c0 with addr=10.0.0.2, port=4420 00:22:21.867 [2024-11-26 19:23:44.741898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f21c0 is same with the state(6) to be set 00:22:21.867 [2024-11-26 19:23:44.742039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.867 [2024-11-26 19:23:44.742048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e6200 with addr=10.0.0.2, port=4420 00:22:21.867 [2024-11-26 19:23:44.742055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e6200 is same with the state(6) to be set 00:22:21.867 [2024-11-26 19:23:44.742084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x271d4c0 (9): Bad file descriptor 00:22:21.867 [2024-11-26 19:23:44.742093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f1300 (9): Bad file descriptor 00:22:21.867 [2024-11-26 
19:23:44.742101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f21c0 (9): Bad file descriptor 00:22:21.867 [2024-11-26 19:23:44.742109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e6200 (9): Bad file descriptor 00:22:21.867 [2024-11-26 19:23:44.742134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:21.867 [2024-11-26 19:23:44.742141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:21.867 [2024-11-26 19:23:44.742148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:21.867 [2024-11-26 19:23:44.742154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:21.867 [2024-11-26 19:23:44.742161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:21.867 [2024-11-26 19:23:44.742167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:21.867 [2024-11-26 19:23:44.742173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:21.867 [2024-11-26 19:23:44.742178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:21.867 [2024-11-26 19:23:44.742184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:21.867 [2024-11-26 19:23:44.742190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:21.867 [2024-11-26 19:23:44.742196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:21.867 [2024-11-26 19:23:44.742201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:21.867 [2024-11-26 19:23:44.742207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:21.867 [2024-11-26 19:23:44.742213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:21.867 [2024-11-26 19:23:44.742219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:21.867 [2024-11-26 19:23:44.742225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:22:22.126 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3804852 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3804852 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3804852 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:23.063 rmmod nvme_tcp 00:22:23.063 
rmmod nvme_fabrics 00:22:23.063 rmmod nvme_keyring 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3804574 ']' 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3804574 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3804574 ']' 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3804574 00:22:23.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3804574) - No such process 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3804574 is not found' 00:22:23.063 Process with pid 3804574 is not found 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.063 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:25.603 00:22:25.603 real 0m7.926s 00:22:25.603 user 0m19.874s 00:22:25.603 sys 0m1.254s 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:25.603 ************************************ 00:22:25.603 END TEST nvmf_shutdown_tc3 00:22:25.603 ************************************ 00:22:25.603 19:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:25.603 ************************************ 00:22:25.603 START TEST nvmf_shutdown_tc4 00:22:25.603 ************************************ 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:25.603 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:25.603 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.603 19:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:25.603 Found net devices under 0000:86:00.0: cvl_0_0 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:25.603 Found net devices under 0000:86:00.1: cvl_0_1 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:25.603 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:25.603 19:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:25.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:22:25.604 00:22:25.604 --- 10.0.0.2 ping statistics --- 00:22:25.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.604 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:25.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:25.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:22:25.604 00:22:25.604 --- 10.0.0.1 ping statistics --- 00:22:25.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.604 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3806096 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3806096 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3806096 ']' 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:25.604 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:25.604 [2024-11-26 19:23:48.690992] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:22:25.604 [2024-11-26 19:23:48.691040] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.863 [2024-11-26 19:23:48.768991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.863 [2024-11-26 19:23:48.809727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.863 [2024-11-26 19:23:48.809767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.863 [2024-11-26 19:23:48.809774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.863 [2024-11-26 19:23:48.809780] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.863 [2024-11-26 19:23:48.809784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:25.863 [2024-11-26 19:23:48.811307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.863 [2024-11-26 19:23:48.811397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.863 [2024-11-26 19:23:48.811481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.863 [2024-11-26 19:23:48.811482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:25.863 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:25.863 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:25.863 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:25.863 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:25.863 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:25.863 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.863 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:25.863 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.863 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:25.863 [2024-11-26 19:23:48.956811] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.863 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.863 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:25.863 19:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:25.863 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:25.863 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:25.863 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:25.863 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.863 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.123 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.123 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.123 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.123 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.123 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.123 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.123 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.123 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.123 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.123 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.123 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.123 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.123 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.123 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.123 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.123 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.123 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.123 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:26.123 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:26.123 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.123 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:26.123 Malloc1 
00:22:26.123 [2024-11-26 19:23:49.066541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.123 Malloc2 00:22:26.123 Malloc3 00:22:26.123 Malloc4 00:22:26.123 Malloc5 00:22:26.380 Malloc6 00:22:26.380 Malloc7 00:22:26.380 Malloc8 00:22:26.380 Malloc9 00:22:26.380 Malloc10 00:22:26.380 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.380 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:26.380 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:26.380 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:26.380 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3806176 00:22:26.380 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:26.380 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:26.637 [2024-11-26 19:23:49.566122] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:31.914 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:31.914 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3806096 00:22:31.914 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3806096 ']' 00:22:31.914 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3806096 00:22:31.914 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:22:31.914 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.914 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3806096 00:22:31.914 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:31.914 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:31.914 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3806096' 00:22:31.914 killing process with pid 3806096 00:22:31.914 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3806096 00:22:31.914 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3806096 00:22:31.914 [2024-11-26 19:23:54.557586] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe47140 is same with the state(6) to be set 00:22:31.914 [2024-11-26 19:23:54.557640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe47140 is same with the state(6) to be set 00:22:31.914 [2024-11-26 19:23:54.557648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe47140 is same with the state(6) to be set 00:22:31.914 [2024-11-26 19:23:54.557655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe47140 is same with the state(6) to be set 00:22:31.914 [2024-11-26 19:23:54.557661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe47140 is same with the state(6) to be set 00:22:31.914 [2024-11-26 19:23:54.557668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe47140 is same with the state(6) to be set 00:22:31.914 [2024-11-26 19:23:54.559498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe467a0 is same with the state(6) to be set 00:22:31.914 [2024-11-26 19:23:54.559527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe467a0 is same with the state(6) to be set 00:22:31.914 [2024-11-26 19:23:54.559535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe467a0 is same with the state(6) to be set 00:22:31.914 [2024-11-26 19:23:54.559542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe467a0 is same with the state(6) to be set 00:22:31.914 [2024-11-26 19:23:54.559549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe467a0 is same with the state(6) to be set 00:22:31.914 [2024-11-26 19:23:54.559555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe467a0 is same with the state(6) to be set 00:22:31.914 [2024-11-26 19:23:54.559561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe467a0 is same with the state(6) to be set 00:22:31.914 [2024-11-26 19:23:54.559567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe467a0 is same with the state(6) to be set 00:22:31.914 [2024-11-26 19:23:54.559573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe467a0 is same with the state(6) to be set 00:22:31.914 [2024-11-26 19:23:54.561720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56250 is same with the state(6) to be set 00:22:31.914 [2024-11-26 19:23:54.561749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56250 is same with the state(6) to be set 00:22:31.914 [2024-11-26 19:23:54.561755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56250 is same with the state(6) to be set 00:22:31.914 Write completed with error (sct=0, sc=8) 00:22:31.914 Write completed with error (sct=0, sc=8) 00:22:31.914 starting I/O failed: -6 00:22:31.914 Write completed with error (sct=0, sc=8) 00:22:31.914 Write completed with error (sct=0, sc=8) 00:22:31.914 Write completed with error (sct=0, sc=8) 00:22:31.914 Write completed with error (sct=0, sc=8) 00:22:31.914 starting I/O failed: -6 00:22:31.914 Write completed with error (sct=0, sc=8) 00:22:31.914 Write completed with error (sct=0, sc=8) 00:22:31.914 Write completed with error (sct=0, sc=8) 00:22:31.914 Write completed with error (sct=0, sc=8) 00:22:31.914 starting I/O failed: -6 
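Taking a step back from the raw dump: what nvmf_shutdown_tc4 exercises here is killing the target while a client workload is running. The trace above starts spdk_nvme_perf in the background (perfpid=3806176), sleeps five seconds, then killprocess takes down the nvmf target (pid 3806096) with I/O still in flight. A minimal sketch of that sequence, assuming $nvmfpid holds the target pid and reusing the exact perf flags from the trace:

  # Hedged sketch of the tc4 sequence; variable names are assumptions, flags are copied from the log.
  ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
  perfpid=$!
  sleep 5
  kill "$nvmfpid"          # stop the nvmf target mid-workload (the killprocess call above)
  wait "$perfpid" || true  # perf is expected to fail once the target disappears

The error storm that follows is the expected outcome: each initiator queue pair loses its TCP connection, its outstanding writes are completed with an error status ('Write completed with error (sct=0, sc=8)'), and every new submission fails with -6, i.e. ENXIO, 'No such device or address'.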
00:22:31.914 Write completed with error (sct=0, sc=8) 00:22:31.914 starting I/O failed: -6
[error dump condensed: the 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' pair repeats for every write still queued on each connection. Interleaved with those lines, tcp.c:1773:nvmf_tcp_qpair_set_recv_state keeps logging '*ERROR*: The recv state of tqpair=... is same with the state(6) to be set' for tqpairs 0xe44f90, 0xc58910, 0xe45930, 0xe45e00, 0xe462d0 and 0xe45460, and nvme_qpair.c: 812:spdk_nvme_qpair_process_completions reports 'CQ transport error -6 (No such device or address)' in this order:
 nqn.2016-06.io.spdk:cnode3, qpair ids 2, 4, 1, 3 (19:23:54.567837-19:23:54.571858), then 'NVMe io qpair process completion error'
 nqn.2016-06.io.spdk:cnode1, qpair ids 2, 4, 1, 3 (19:23:54.575154-19:23:54.578575), then 'NVMe io qpair process completion error'
 nqn.2016-06.io.spdk:cnode2, qpair ids 1, 4, 2, 3 (19:23:54.579650-19:23:54.583088), then 'NVMe io qpair process completion error'
 nqn.2016-06.io.spdk:cnode4, qpair ids 2 and 4 (19:23:54.584044-19:23:54.584832), with the same write-error/I/O-failure lines continuing below]
Write completed with error (sct=0, sc=8) 00:22:31.920 starting I/O failed: -6 00:22:31.920 Write completed with error (sct=0, sc=8) 00:22:31.920 Write completed with error (sct=0, sc=8) 00:22:31.920 starting I/O failed: -6 00:22:31.920 Write completed with error (sct=0, sc=8) 00:22:31.920 starting I/O failed: -6 00:22:31.920 Write completed with error (sct=0, sc=8) 00:22:31.920 starting I/O failed: -6 00:22:31.920 Write completed with error (sct=0, sc=8) 00:22:31.920 Write completed with error (sct=0, sc=8) 00:22:31.920 starting I/O failed: -6 00:22:31.920 Write completed with error (sct=0, sc=8) 00:22:31.920 starting I/O failed: -6 00:22:31.920 Write completed with error (sct=0, sc=8) 00:22:31.920 starting I/O failed: -6 00:22:31.920 Write completed with error (sct=0, sc=8) 00:22:31.920 Write completed with error (sct=0, sc=8) 00:22:31.920 starting I/O failed: -6 00:22:31.920 Write completed with error (sct=0, sc=8) 00:22:31.920 starting I/O failed: -6 00:22:31.920 Write completed with error (sct=0, sc=8) 00:22:31.920 starting I/O failed: -6 00:22:31.920 Write completed with error (sct=0, sc=8) 00:22:31.920 Write completed with error (sct=0, sc=8) 00:22:31.920 starting I/O failed: -6 00:22:31.920 Write completed with error (sct=0, sc=8) 00:22:31.920 starting I/O failed: -6 00:22:31.920 Write completed with error (sct=0, sc=8) 00:22:31.920 starting I/O failed: -6 00:22:31.920 Write completed with error (sct=0, sc=8) 00:22:31.920 Write completed with error (sct=0, sc=8) 00:22:31.920 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 [2024-11-26 19:23:54.585881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting 
I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O 
failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 [2024-11-26 19:23:54.587935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:31.921 NVMe io qpair process completion error 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, 
sc=8) 00:22:31.921 [2024-11-26 19:23:54.589020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.921 starting I/O failed: -6 00:22:31.921 starting I/O failed: -6 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 Write completed with error (sct=0, sc=8) 00:22:31.921 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 [2024-11-26 19:23:54.589938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 
00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 [2024-11-26 19:23:54.590944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on 
qpair id 1 00:22:31.922 starting I/O failed: -6 00:22:31.922 starting I/O failed: -6 00:22:31.922 starting I/O failed: -6 00:22:31.922 starting I/O failed: -6 00:22:31.922 starting I/O failed: -6 00:22:31.922 starting I/O failed: -6 00:22:31.922 starting I/O failed: -6 00:22:31.922 starting I/O failed: -6 00:22:31.922 starting I/O failed: -6 00:22:31.922 starting I/O failed: -6 00:22:31.922 starting I/O failed: -6 00:22:31.922 starting I/O failed: -6 00:22:31.922 NVMe io qpair process completion error 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 [2024-11-26 19:23:54.592550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O 
failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.922 starting I/O failed: -6 00:22:31.922 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 [2024-11-26 19:23:54.593443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write 
completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 [2024-11-26 19:23:54.594456] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error 
(sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.923 Write completed with error (sct=0, sc=8) 00:22:31.923 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 [2024-11-26 19:23:54.597187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:31.924 NVMe io qpair process completion error 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write 
completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write 
completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write 
completed with error (sct=0, sc=8) 00:22:31.924 starting I/O failed: -6 00:22:31.924 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write completed with error (sct=0, sc=8) 00:22:31.925 starting I/O failed: -6 00:22:31.925 Write 
completed with error (sct=0, sc=8)
[hundreds of interleaved "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completion entries omitted here; the distinct errors logged in this span were:]
00:22:31.925 [2024-11-26 19:23:54.601813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:31.926 [2024-11-26 19:23:54.602707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:31.926 [2024-11-26 19:23:54.603731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:31.927 [2024-11-26 19:23:54.605837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:31.927 NVMe io qpair process completion error
00:22:31.927 [2024-11-26 19:23:54.606802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:31.927 [2024-11-26 19:23:54.607690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:31.927 [2024-11-26 19:23:54.608720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:31.928 [2024-11-26 19:23:54.614542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:31.928 NVMe io qpair process completion error
00:22:31.928 [2024-11-26 19:23:54.615534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:31.929 [2024-11-26 19:23:54.616420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:31.929 [2024-11-26 19:23:54.618851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:31.930 [2024-11-26 19:23:54.621617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:31.930 NVMe io qpair process completion error
[remaining "Write completed with error (sct=0, sc=8)" completion entries omitted]
00:22:31.930 Initializing NVMe Controllers
00:22:31.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:31.930 Controller IO queue size 128, less than required.
00:22:31.930 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
[the same two-line queue-size advisory was printed for every controller listed below]
00:22:31.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:31.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:31.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:31.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:31.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:31.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:31.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:31.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:31.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:31.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:31.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:31.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:31.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:31.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:31.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:31.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:31.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:31.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:31.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:31.930 Initialization complete. Launching workers.
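The queue-size advisory above is actionable from the perf tool's command line. The exact spdk_nvme_perf invocation used by this test is not shown in this excerpt, so the sketch below is illustrative only: -q (queue depth), -o (I/O size in bytes), -w (workload), -t (run time) and -r (transport ID) are standard spdk_nvme_perf options, and the target address and subsystem NQN are taken from the log above.

  # Illustrative re-run with a shallower queue (64) and 4 KiB I/O, per the advisory;
  # the flag values here are assumptions, not the test's actual invocation.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 64 -o 4096 -w write -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode2'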
00:22:31.930 ========================================================
00:22:31.930 Latency(us)
00:22:31.930 Device Information : IOPS MiB/s Average min max
00:22:31.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2228.15 95.74 57451.91 692.89 107393.60
00:22:31.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2205.68 94.78 58046.86 803.24 106186.89
00:22:31.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2224.12 95.57 57727.35 623.30 103800.41
00:22:31.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2211.62 95.03 57906.01 806.51 103826.30
00:22:31.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2206.53 94.81 58055.27 854.97 107389.46
00:22:31.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2178.56 93.61 58810.06 805.84 108477.18
00:22:31.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2215.86 95.21 57836.73 707.76 110981.03
00:22:31.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2197.00 94.40 57810.66 781.68 98147.05
00:22:31.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2217.76 95.29 57814.78 920.50 114999.17
00:22:31.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2192.33 94.20 57928.59 786.29 99462.94
00:22:31.930 ========================================================
00:22:31.930 Total : 22077.61 948.65 57937.02 623.30 114999.17
00:22:31.930
00:22:31.930 [2024-11-26 19:23:54.627230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12ae0 is same with the state(6) to be set
[the same message was logged at 19:23:54.627278-627498 for tqpair=0xd10890, 0xd10bc0, 0xd10ef0, 0xd12900, 0xd11410, 0xd11740, 0xd10560, 0xd11a70 and 0xd12720]
00:22:31.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:31.930 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
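The roughly 57-59 ms average latency in the table above is what a saturated 128-deep queue predicts: by Little's law, outstanding I/O is approximately IOPS times average latency. Using the cnode2 row as an example (this is only an arithmetic check on the numbers already reported, not additional measured data):

  # 2228.15 IOPS x 57451.91 us ~ 128 outstanding commands, matching the IO queue size of 128.
  awk 'BEGIN { printf "outstanding I/O ~ %.1f\n", 2228.15 * 57451.91 / 1e6 }'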
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3806176
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3806176
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3806176
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:32.869 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:32.869 rmmod nvme_tcp
00:22:33.129 rmmod nvme_fabrics
00:22:33.129 rmmod nvme_keyring
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3806096 ']'
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3806096
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3806096 ']'
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3806096
00:22:33.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3806096) - No such process
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3806096 is not found'
00:22:33.129 Process with pid 3806096 is not found
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:33.129 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:35.037 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:35.037
00:22:35.037 real 0m9.797s
00:22:35.037 user 0m24.815s
00:22:35.037 sys 0m5.293s
00:22:35.037 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:35.037 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:35.037 ************************************
00:22:35.037 END TEST nvmf_shutdown_tc4
00:22:35.037 ************************************
00:22:35.037 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:22:35.037
00:22:35.037 real 0m41.263s
00:22:35.037 user 1m41.938s
00:22:35.037 sys 0m14.027s
00:22:35.037 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
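For reference, the nvmftestfini/nvmf_tcp_fini sequence traced above amounts to unloading the initiator-side kernel modules, killing the target process if it is still alive, restoring the firewall without the SPDK-tagged rules, removing the SPDK network namespace and flushing the test interface. A rough manual equivalent is sketched below; the interface name cvl_0_1 and namespace name cvl_0_0_ns_spdk come from this log, while $tgt_pid and the `ip netns delete` step are assumptions about what killprocess and remove_spdk_ns do internally.

  sync
  modprobe -v -r nvme-tcp                                # also unloads nvme_fabrics/nvme_keyring, as in the rmmod output above
  modprobe -v -r nvme-fabrics
  kill -0 "$tgt_pid" 2>/dev/null && kill "$tgt_pid"      # $tgt_pid: placeholder for the nvmf target PID
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except SPDK-tagged rules
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # assumed equivalent of remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # release the test NIC address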
common/autotest_common.sh@10 -- # set +x 00:22:35.037 ************************************ 00:22:35.037 END TEST nvmf_shutdown 00:22:35.037 ************************************ 00:22:35.296 19:23:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:35.296 19:23:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:35.296 19:23:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.296 19:23:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:35.296 ************************************ 00:22:35.296 START TEST nvmf_nsid 00:22:35.296 ************************************ 00:22:35.296 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:35.296 * Looking for test storage... 00:22:35.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:35.296 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:35.296 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:22:35.296 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:35.296 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:35.296 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.296 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.296 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.296 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.296 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:35.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.297 --rc genhtml_branch_coverage=1 00:22:35.297 --rc genhtml_function_coverage=1 00:22:35.297 --rc genhtml_legend=1 00:22:35.297 --rc geninfo_all_blocks=1 00:22:35.297 --rc geninfo_unexecuted_blocks=1 00:22:35.297 00:22:35.297 ' 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:35.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.297 --rc genhtml_branch_coverage=1 00:22:35.297 --rc genhtml_function_coverage=1 00:22:35.297 --rc genhtml_legend=1 00:22:35.297 --rc geninfo_all_blocks=1 00:22:35.297 --rc geninfo_unexecuted_blocks=1 00:22:35.297 00:22:35.297 ' 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:35.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.297 --rc genhtml_branch_coverage=1 00:22:35.297 --rc genhtml_function_coverage=1 00:22:35.297 --rc genhtml_legend=1 00:22:35.297 --rc geninfo_all_blocks=1 00:22:35.297 --rc geninfo_unexecuted_blocks=1 00:22:35.297 00:22:35.297 ' 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:35.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.297 --rc genhtml_branch_coverage=1 00:22:35.297 --rc genhtml_function_coverage=1 00:22:35.297 --rc genhtml_legend=1 00:22:35.297 --rc geninfo_all_blocks=1 00:22:35.297 --rc geninfo_unexecuted_blocks=1 00:22:35.297 00:22:35.297 ' 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.297 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.556 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:35.556 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:35.556 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.556 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.556 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.556 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.556 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.556 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.556 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.556 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.556 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.556 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.556 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.556 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.556 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:35.556 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.556 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:35.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:35.557 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:42.132 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:42.132 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
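The gather_supported_nvmf_pci_devs trace above buckets candidate ports by PCI vendor:device ID (the two E810 ports in this run report 0x8086:0x159b) before resolving the kernel net devices from sysfs. A minimal standalone sketch of that bucketing, assuming lspci output as the device source (the real common.sh walks a pre-built pci_bus_cache instead) and covering only the IDs visible in this trace:

  #!/usr/bin/env bash
  # Sketch only: group NICs into the e810/x722/mlx families the way the trace above does.
  shopt -s nullglob
  declare -a e810=() x722=() mlx=()
  while read -r addr; do
      ids=$(lspci -n -s "$addr" | awk '{print $3}')   # e.g. "8086:159b"
      case "$ids" in
          8086:1592|8086:159b) e810+=("$addr") ;;     # Intel E810
          8086:37d2)           x722+=("$addr") ;;     # Intel X722
          15b3:*)              mlx+=("$addr")  ;;     # Mellanox
      esac
  done < <(lspci -D | awk '/Ethernet controller/ {print $1}')
  # Resolve the net device behind each matched port via sysfs, mirroring the
  # pci_net_devs=(/sys/bus/pci/devices/$pci/net/*) step that follows in the trace.
  for pci in "${e810[@]}"; do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net devices under $pci: ${dev##*/}"
      done
  done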
00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:42.132 Found net devices under 0000:86:00.0: cvl_0_0 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:42.132 Found net devices under 0000:86:00.1: cvl_0_1 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.132 19:24:04 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.132 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:42.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:22:42.133 00:22:42.133 --- 10.0.0.2 ping statistics --- 00:22:42.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.133 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:42.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:22:42.133 00:22:42.133 --- 10.0.0.1 ping statistics --- 00:22:42.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.133 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3810885 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3810885 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3810885 ']' 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:42.133 [2024-11-26 19:24:04.432756] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:22:42.133 [2024-11-26 19:24:04.432808] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.133 [2024-11-26 19:24:04.513678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.133 [2024-11-26 19:24:04.554370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.133 [2024-11-26 19:24:04.554406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.133 [2024-11-26 19:24:04.554412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.133 [2024-11-26 19:24:04.554418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.133 [2024-11-26 19:24:04.554424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:42.133 [2024-11-26 19:24:04.554980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3810991 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=26be397f-ab16-4df4-8a2b-ad0f9ec5eaae 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=516f1103-d619-41dc-981d-613783802196 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=53a707fa-d85c-40b7-bf25-aa617b2457e9 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:42.133 null0 00:22:42.133 null1 00:22:42.133 [2024-11-26 19:24:04.737544] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:22:42.133 [2024-11-26 19:24:04.737587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3810991 ] 00:22:42.133 null2 00:22:42.133 [2024-11-26 19:24:04.743555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.133 [2024-11-26 19:24:04.767744] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3810991 /var/tmp/tgt2.sock 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3810991 ']' 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:42.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
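The nvmf_tcp_init entries above move one E810 port (cvl_0_0) into a private network namespace for the SPDK target, leave its sibling (cvl_0_1) in the root namespace for the initiator, and tag the firewall exception with an SPDK_NVMF comment so the later iptables-save | grep -v SPDK_NVMF | iptables-restore cleanup can strip it again. Condensed into a standalone sketch; interface names and addresses are the ones this trace happens to use:

  # Sketch of the target/initiator split performed by nvmf_tcp_init above.
  TGT_IF=cvl_0_0           # port handed to the SPDK target (inside the namespace)
  INI_IF=cvl_0_1           # port left in the root namespace for the initiator
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Comment-tag the ACCEPT rule so cleanup can drop every SPDK rule in one pass.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # Reachability check in both directions, as the trace does.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1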
00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.133 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:42.133 [2024-11-26 19:24:04.810997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.133 [2024-11-26 19:24:04.851742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.133 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.133 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:42.133 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:42.392 [2024-11-26 19:24:05.390563] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.392 [2024-11-26 19:24:05.406676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:42.392 nvme0n1 nvme0n2 00:22:42.392 nvme1n1 00:22:42.392 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:42.392 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:42.392 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:43.764 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:43.764 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:43.764 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:22:43.764 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:43.764 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:43.764 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:43.764 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:43.764 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:43.764 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:43.764 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:43.764 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:43.764 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:43.764 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:44.698 19:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 26be397f-ab16-4df4-8a2b-ad0f9ec5eaae 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=26be397fab164df48a2bad0f9ec5eaae 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 26BE397FAB164DF48A2BAD0F9EC5EAAE 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 26BE397FAB164DF48A2BAD0F9EC5EAAE == \2\6\B\E\3\9\7\F\A\B\1\6\4\D\F\4\8\A\2\B\A\D\0\F\9\E\C\5\E\A\A\E ]] 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 516f1103-d619-41dc-981d-613783802196 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=516f1103d61941dc981d613783802196 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 516F1103D61941DC981D613783802196 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 516F1103D61941DC981D613783802196 == \5\1\6\F\1\1\0\3\D\6\1\9\4\1\D\C\9\8\1\D\6\1\3\7\8\3\8\0\2\1\9\6 ]] 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:44.698 19:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 53a707fa-d85c-40b7-bf25-aa617b2457e9 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=53a707fad85c40b7bf25aa617b2457e9 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 53A707FAD85C40B7BF25AA617B2457E9 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 53A707FAD85C40B7BF25AA617B2457E9 == \5\3\A\7\0\7\F\A\D\8\5\C\4\0\B\7\B\F\2\5\A\A\6\1\7\B\2\4\5\7\E\9 ]] 00:22:44.698 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:44.956 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:44.956 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:44.956 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3810991 00:22:44.956 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3810991 ']' 00:22:44.956 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3810991 00:22:44.956 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:44.956 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.956 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3810991 00:22:44.956 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:44.956 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:44.956 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3810991' 00:22:44.956 killing process with pid 3810991 00:22:44.956 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3810991 00:22:44.956 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3810991 00:22:45.214 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:45.472 rmmod nvme_tcp 00:22:45.472 rmmod nvme_fabrics 00:22:45.472 rmmod nvme_keyring 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3810885 ']' 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3810885 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3810885 ']' 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3810885 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3810885 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3810885' 00:22:45.472 killing process with pid 3810885 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3810885 00:22:45.472 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3810885 00:22:45.730 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:45.730 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:45.730 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:45.731 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:45.731 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:45.731 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:45.731 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:45.731 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:45.731 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:45.731 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.731 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.731 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.633 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:47.633 00:22:47.633 real 0m12.470s 00:22:47.633 user 0m9.754s 
00:22:47.633 sys 0m5.545s 00:22:47.633 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:47.633 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:47.633 ************************************ 00:22:47.633 END TEST nvmf_nsid 00:22:47.633 ************************************ 00:22:47.633 19:24:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:47.633 00:22:47.633 real 11m58.976s 00:22:47.633 user 25m43.294s 00:22:47.633 sys 3m40.042s 00:22:47.633 19:24:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:47.633 19:24:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:47.633 ************************************ 00:22:47.633 END TEST nvmf_target_extra 00:22:47.633 ************************************ 00:22:47.892 19:24:10 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:47.892 19:24:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:47.892 19:24:10 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:47.892 19:24:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:47.892 ************************************ 00:22:47.892 START TEST nvmf_host 00:22:47.892 ************************************ 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:47.892 * Looking for test storage... 00:22:47.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:47.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.892 --rc genhtml_branch_coverage=1 00:22:47.892 --rc genhtml_function_coverage=1 00:22:47.892 --rc genhtml_legend=1 00:22:47.892 --rc geninfo_all_blocks=1 00:22:47.892 --rc geninfo_unexecuted_blocks=1 00:22:47.892 00:22:47.892 ' 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:47.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.892 --rc genhtml_branch_coverage=1 00:22:47.892 --rc genhtml_function_coverage=1 00:22:47.892 --rc genhtml_legend=1 00:22:47.892 --rc geninfo_all_blocks=1 00:22:47.892 --rc geninfo_unexecuted_blocks=1 00:22:47.892 00:22:47.892 ' 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:47.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.892 --rc genhtml_branch_coverage=1 00:22:47.892 --rc genhtml_function_coverage=1 00:22:47.892 --rc genhtml_legend=1 00:22:47.892 --rc geninfo_all_blocks=1 00:22:47.892 --rc geninfo_unexecuted_blocks=1 00:22:47.892 00:22:47.892 ' 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:47.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.892 --rc genhtml_branch_coverage=1 00:22:47.892 --rc genhtml_function_coverage=1 00:22:47.892 --rc genhtml_legend=1 00:22:47.892 --rc geninfo_all_blocks=1 00:22:47.892 --rc geninfo_unexecuted_blocks=1 00:22:47.892 00:22:47.892 ' 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
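The connect sequence earlier in the nsid run, and the host tests that begin here, pair a generated host NQN/ID with the listener the second target exposes on 10.0.0.1:4421 and then poll lsblk until the namespace block devices appear. A hedged sketch built only from commands visible in this trace; the subsystem NQN, address, and example UUID are the ones the nsid test happened to use:

  # Sketch of the connect-and-verify pattern seen above (nvme-cli, lsblk, jq only).
  HOSTNQN=$(nvme gen-hostnqn)              # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  HOSTID=${HOSTNQN##*uuid:}                # the trace uses the NQN's uuid suffix as the host ID
  nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
      --hostnqn="$HOSTNQN" --hostid="$HOSTID"

  # waitforblk: poll for up to ~15s until the namespace shows up as a block device.
  i=0
  until lsblk -l -o NAME | grep -q -w nvme0n1; do
      (( i++ < 15 )) || { echo "nvme0n1 never appeared" >&2; exit 1; }
      sleep 1
  done

  # uuid2nguid check: the namespace UUID with dashes stripped must equal the NGUID
  # the controller reports (compared case-insensitively here).
  expected=$(tr -d - <<< 26be397f-ab16-4df4-8a2b-ad0f9ec5eaae)
  actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
  [[ ${actual^^} == "${expected^^}" ]] && echo "nguid matches"

  nvme disconnect -d /dev/nvme0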
00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.892 19:24:10 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:47.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:47.893 19:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.152 ************************************ 00:22:48.152 START TEST nvmf_multicontroller 00:22:48.152 ************************************ 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:48.152 * Looking for test storage... 
00:22:48.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:48.152 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:48.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.152 --rc genhtml_branch_coverage=1 00:22:48.152 --rc genhtml_function_coverage=1 00:22:48.152 --rc genhtml_legend=1 00:22:48.152 --rc geninfo_all_blocks=1 00:22:48.152 --rc geninfo_unexecuted_blocks=1 00:22:48.152 00:22:48.153 ' 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:48.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.153 --rc genhtml_branch_coverage=1 00:22:48.153 --rc genhtml_function_coverage=1 00:22:48.153 --rc genhtml_legend=1 00:22:48.153 --rc geninfo_all_blocks=1 00:22:48.153 --rc geninfo_unexecuted_blocks=1 00:22:48.153 00:22:48.153 ' 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:48.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.153 --rc genhtml_branch_coverage=1 00:22:48.153 --rc genhtml_function_coverage=1 00:22:48.153 --rc genhtml_legend=1 00:22:48.153 --rc geninfo_all_blocks=1 00:22:48.153 --rc geninfo_unexecuted_blocks=1 00:22:48.153 00:22:48.153 ' 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:48.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.153 --rc genhtml_branch_coverage=1 00:22:48.153 --rc genhtml_function_coverage=1 00:22:48.153 --rc genhtml_legend=1 00:22:48.153 --rc geninfo_all_blocks=1 00:22:48.153 --rc geninfo_unexecuted_blocks=1 00:22:48.153 00:22:48.153 ' 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:48.153 19:24:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:48.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:48.153 19:24:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:48.153 19:24:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:54.728 
19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:54.728 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:54.728 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:54.729 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:54.729 19:24:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:54.729 Found net devices under 0000:86:00.0: cvl_0_0 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:54.729 Found net devices under 0000:86:00.1: cvl_0_1 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
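For orientation, the nvmf_tcp_init trace that follows amounts to moving the second e810 port into a private network namespace so the target address (10.0.0.2) and the initiator address (10.0.0.1) exchange traffic over a real link. A condensed sketch of that plumbing, using the interface names and addresses seen in this run (other nodes may enumerate different cvl_* devices, so treat the names as run-specific):

    # target-side port gets its own namespace; initiator-side port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                                 # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator sanity check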
00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:54.729 19:24:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:54.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:22:54.729 00:22:54.729 --- 10.0.0.2 ping statistics --- 00:22:54.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.729 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:54.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:54.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:22:54.729 00:22:54.729 --- 10.0.0.1 ping statistics --- 00:22:54.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.729 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3815534 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3815534 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3815534 ']' 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:54.729 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.729 [2024-11-26 19:24:17.240667] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:22:54.729 [2024-11-26 19:24:17.240721] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.730 [2024-11-26 19:24:17.320766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:54.730 [2024-11-26 19:24:17.362875] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.730 [2024-11-26 19:24:17.362908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.730 [2024-11-26 19:24:17.362915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.730 [2024-11-26 19:24:17.362922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.730 [2024-11-26 19:24:17.362928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.730 [2024-11-26 19:24:17.364223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.730 [2024-11-26 19:24:17.364328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.730 [2024-11-26 19:24:17.364329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.730 [2024-11-26 19:24:17.513793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.730 Malloc0 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.730 [2024-11-26 19:24:17.583292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.730 [2024-11-26 19:24:17.591185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.730 Malloc1 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3815719 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3815719 /var/tmp/bdevperf.sock 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3815719 ']' 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:54.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
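The request/response pairs that follow probe duplicate attaches against the controller that bdevperf already holds; every variant that reuses the same listener on port 4420 is rejected with JSON-RPC code -114 (-EALREADY). Outside the harness the same exchange could be reproduced with scripts/rpc.py (rpc_cmd in the trace is the test framework's wrapper around it); the flags and addresses below are copied from the trace, so read this as an illustrative sketch rather than a reference invocation:

    # initial attach succeeds and exposes NVMe0n1 to bdevperf
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    # retry under the same bdev name but a different hostnqn: rejected with
    # "A controller named NVMe0 already exists with the specified network path"
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001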
00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:54.730 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.009 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.009 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:55.009 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:55.009 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.009 19:24:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.009 NVMe0n1 00:22:55.009 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.009 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:55.009 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:55.009 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.009 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.009 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.009 1 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.284 request: 00:22:55.284 { 00:22:55.284 "name": "NVMe0", 00:22:55.284 "trtype": "tcp", 00:22:55.284 "traddr": "10.0.0.2", 00:22:55.284 "adrfam": "ipv4", 00:22:55.284 "trsvcid": "4420", 00:22:55.284 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:55.284 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:55.284 "hostaddr": "10.0.0.1", 00:22:55.284 "prchk_reftag": false, 00:22:55.284 "prchk_guard": false, 00:22:55.284 "hdgst": false, 00:22:55.284 "ddgst": false, 00:22:55.284 "allow_unrecognized_csi": false, 00:22:55.284 "method": "bdev_nvme_attach_controller", 00:22:55.284 "req_id": 1 00:22:55.284 } 00:22:55.284 Got JSON-RPC error response 00:22:55.284 response: 00:22:55.284 { 00:22:55.284 "code": -114, 00:22:55.284 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:55.284 } 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.284 request: 00:22:55.284 { 00:22:55.284 "name": "NVMe0", 00:22:55.284 "trtype": "tcp", 00:22:55.284 "traddr": "10.0.0.2", 00:22:55.284 "adrfam": "ipv4", 00:22:55.284 "trsvcid": "4420", 00:22:55.284 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:55.284 "hostaddr": "10.0.0.1", 00:22:55.284 "prchk_reftag": false, 00:22:55.284 "prchk_guard": false, 00:22:55.284 "hdgst": false, 00:22:55.284 "ddgst": false, 00:22:55.284 "allow_unrecognized_csi": false, 00:22:55.284 "method": "bdev_nvme_attach_controller", 00:22:55.284 "req_id": 1 00:22:55.284 } 00:22:55.284 Got JSON-RPC error response 00:22:55.284 response: 00:22:55.284 { 00:22:55.284 "code": -114, 00:22:55.284 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:55.284 } 00:22:55.284 19:24:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.284 request: 00:22:55.284 { 00:22:55.284 "name": "NVMe0", 00:22:55.284 "trtype": "tcp", 00:22:55.284 "traddr": "10.0.0.2", 00:22:55.284 "adrfam": "ipv4", 00:22:55.284 "trsvcid": "4420", 00:22:55.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.284 "hostaddr": "10.0.0.1", 00:22:55.284 "prchk_reftag": false, 00:22:55.284 "prchk_guard": false, 00:22:55.284 "hdgst": false, 00:22:55.284 "ddgst": false, 00:22:55.284 "multipath": "disable", 00:22:55.284 "allow_unrecognized_csi": false, 00:22:55.284 "method": "bdev_nvme_attach_controller", 00:22:55.284 "req_id": 1 00:22:55.284 } 00:22:55.284 Got JSON-RPC error response 00:22:55.284 response: 00:22:55.284 { 00:22:55.284 "code": -114, 00:22:55.284 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:55.284 } 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:55.284 19:24:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.284 request: 00:22:55.284 { 00:22:55.284 "name": "NVMe0", 00:22:55.284 "trtype": "tcp", 00:22:55.284 "traddr": "10.0.0.2", 00:22:55.284 "adrfam": "ipv4", 00:22:55.284 "trsvcid": "4420", 00:22:55.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.284 "hostaddr": "10.0.0.1", 00:22:55.284 "prchk_reftag": false, 00:22:55.284 "prchk_guard": false, 00:22:55.284 "hdgst": false, 00:22:55.284 "ddgst": false, 00:22:55.284 "multipath": "failover", 00:22:55.284 "allow_unrecognized_csi": false, 00:22:55.284 "method": "bdev_nvme_attach_controller", 00:22:55.284 "req_id": 1 00:22:55.284 } 00:22:55.284 Got JSON-RPC error response 00:22:55.284 response: 00:22:55.284 { 00:22:55.284 "code": -114, 00:22:55.284 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:55.284 } 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:55.284 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:55.285 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:55.285 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:55.285 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.285 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.285 NVMe0n1 00:22:55.285 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
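The remaining steps below add a second controller (NVMe1) on the 4421 listener, verify that two controllers are visible, and then run the I/O pass via bdevperf.py perform_tests; its JSON summary reports roughly 24.8k 4 KiB write IOPS over a ~1 s run. As a quick plausibility check, the reported "mibps" is consistent with iops * io_size / 2^20; a throwaway one-liner with the numbers copied from that summary (illustrative only):

    awk 'BEGIN { printf "%.2f MiB/s\n", 24850.90453903989 * 4096 / (1024 * 1024) }'   # prints 97.07, matching "mibps"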
00:22:55.285 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:55.285 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.285 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.285 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.285 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:55.285 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.285 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.569 00:22:55.569 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.569 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:55.569 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:55.569 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.569 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.569 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.569 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:55.569 19:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:56.957 { 00:22:56.957 "results": [ 00:22:56.957 { 00:22:56.957 "job": "NVMe0n1", 00:22:56.957 "core_mask": "0x1", 00:22:56.957 "workload": "write", 00:22:56.957 "status": "finished", 00:22:56.957 "queue_depth": 128, 00:22:56.957 "io_size": 4096, 00:22:56.957 "runtime": 1.003384, 00:22:56.957 "iops": 24850.90453903989, 00:22:56.957 "mibps": 97.07384585562457, 00:22:56.957 "io_failed": 0, 00:22:56.957 "io_timeout": 0, 00:22:56.957 "avg_latency_us": 5143.930195651551, 00:22:56.957 "min_latency_us": 2028.4952380952382, 00:22:56.957 "max_latency_us": 11796.48 00:22:56.957 } 00:22:56.957 ], 00:22:56.957 "core_count": 1 00:22:56.957 } 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3815719 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 3815719 ']' 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3815719 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3815719 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3815719' 00:22:56.957 killing process with pid 3815719 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3815719 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3815719 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:56.957 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:56.957 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:56.957 [2024-11-26 19:24:17.693709] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:22:56.957 [2024-11-26 19:24:17.693761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3815719 ] 00:22:56.957 [2024-11-26 19:24:17.766035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.957 [2024-11-26 19:24:17.806615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.957 [2024-11-26 19:24:18.507987] bdev.c:4906:bdev_name_add: *ERROR*: Bdev name c8c6d7ed-ec34-4a23-bfde-2f7515869ca3 already exists 00:22:56.957 [2024-11-26 19:24:18.508015] bdev.c:8106:bdev_register: *ERROR*: Unable to add uuid:c8c6d7ed-ec34-4a23-bfde-2f7515869ca3 alias for bdev NVMe1n1 00:22:56.957 [2024-11-26 19:24:18.508023] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:56.957 Running I/O for 1 seconds... 00:22:56.957 24807.00 IOPS, 96.90 MiB/s 00:22:56.957 Latency(us) 00:22:56.957 [2024-11-26T18:24:20.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.957 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:56.957 NVMe0n1 : 1.00 24850.90 97.07 0.00 0.00 5143.93 2028.50 11796.48 00:22:56.957 [2024-11-26T18:24:20.072Z] =================================================================================================================== 00:22:56.958 [2024-11-26T18:24:20.072Z] Total : 24850.90 97.07 0.00 0.00 5143.93 2028.50 11796.48 00:22:56.958 Received shutdown signal, test time was about 1.000000 seconds 00:22:56.958 00:22:56.958 Latency(us) 00:22:56.958 [2024-11-26T18:24:20.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.958 [2024-11-26T18:24:20.072Z] =================================================================================================================== 00:22:56.958 [2024-11-26T18:24:20.072Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:56.958 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:56.958 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:56.958 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:56.958 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:56.958 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:56.958 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:56.958 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:56.958 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:56.958 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:56.958 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:56.958 rmmod nvme_tcp 00:22:56.958 rmmod nvme_fabrics 00:22:56.958 rmmod nvme_keyring 00:22:56.958 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:56.958 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:56.958 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:56.958 
19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3815534 ']' 00:22:56.958 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3815534 00:22:56.958 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3815534 ']' 00:22:56.958 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3815534 00:22:56.958 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:56.958 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:56.958 19:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3815534 00:22:56.958 19:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:56.958 19:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:56.958 19:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3815534' 00:22:56.958 killing process with pid 3815534 00:22:56.958 19:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3815534 00:22:56.958 19:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3815534 00:22:57.217 19:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:57.217 19:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:57.217 19:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:57.217 19:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:57.217 19:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:57.217 19:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:57.217 19:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:57.217 19:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:57.217 19:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:57.217 19:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.217 19:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.217 19:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:59.754 00:22:59.754 real 0m11.296s 00:22:59.754 user 0m12.706s 00:22:59.754 sys 0m5.209s 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.754 ************************************ 00:22:59.754 END TEST nvmf_multicontroller 00:22:59.754 ************************************ 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.754 ************************************ 00:22:59.754 START TEST nvmf_aer 00:22:59.754 ************************************ 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:59.754 * Looking for test storage... 00:22:59.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:59.754 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:59.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.755 --rc genhtml_branch_coverage=1 00:22:59.755 --rc genhtml_function_coverage=1 00:22:59.755 --rc genhtml_legend=1 00:22:59.755 --rc geninfo_all_blocks=1 00:22:59.755 --rc geninfo_unexecuted_blocks=1 00:22:59.755 00:22:59.755 ' 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:59.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.755 --rc genhtml_branch_coverage=1 00:22:59.755 --rc genhtml_function_coverage=1 00:22:59.755 --rc genhtml_legend=1 00:22:59.755 --rc geninfo_all_blocks=1 00:22:59.755 --rc geninfo_unexecuted_blocks=1 00:22:59.755 00:22:59.755 ' 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:59.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.755 --rc genhtml_branch_coverage=1 00:22:59.755 --rc genhtml_function_coverage=1 00:22:59.755 --rc genhtml_legend=1 00:22:59.755 --rc geninfo_all_blocks=1 00:22:59.755 --rc geninfo_unexecuted_blocks=1 00:22:59.755 00:22:59.755 ' 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:59.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.755 --rc genhtml_branch_coverage=1 00:22:59.755 --rc genhtml_function_coverage=1 00:22:59.755 --rc genhtml_legend=1 00:22:59.755 --rc geninfo_all_blocks=1 00:22:59.755 --rc geninfo_unexecuted_blocks=1 00:22:59.755 00:22:59.755 ' 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:59.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:59.755 19:24:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:06.326 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:06.326 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:06.326 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:06.327 Found net devices under 0000:86:00.0: cvl_0_0 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:06.327 19:24:28 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:06.327 Found net devices under 0000:86:00.1: cvl_0_1 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:06.327 
19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:06.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:23:06.327 00:23:06.327 --- 10.0.0.2 ping statistics --- 00:23:06.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.327 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:06.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:06.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:23:06.327 00:23:06.327 --- 10.0.0.1 ping statistics --- 00:23:06.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.327 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3819510 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3819510 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3819510 ']' 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.327 [2024-11-26 19:24:28.567031] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
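The two ping checks above close out nvmf_tcp_init, which splits the pair of e810 ports into a target-side network namespace (cvl_0_0 at 10.0.0.2 inside cvl_0_0_ns_spdk) and an initiator-side interface left in the root namespace (cvl_0_1 at 10.0.0.1), plus an iptables accept rule for port 4420. A condensed sketch of that setup, using the same commands the common.sh helpers issue (the cvl_0_0/cvl_0_1 names are what this host enumerated and will differ elsewhere):

# Condensed from the nvmf_tcp_init trace above; interface names are host-specific.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP, inside namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> initiator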
00:23:06.327 [2024-11-26 19:24:28.567075] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.327 [2024-11-26 19:24:28.647631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:06.327 [2024-11-26 19:24:28.692050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.327 [2024-11-26 19:24:28.692082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.327 [2024-11-26 19:24:28.692088] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.327 [2024-11-26 19:24:28.692094] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.327 [2024-11-26 19:24:28.692100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:06.327 [2024-11-26 19:24:28.693518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.327 [2024-11-26 19:24:28.693629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.327 [2024-11-26 19:24:28.693644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:06.327 [2024-11-26 19:24:28.693648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.327 [2024-11-26 19:24:28.842920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.327 Malloc0 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.327 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.327 [2024-11-26 19:24:28.905904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.328 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.328 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:06.328 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.328 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.328 [ 00:23:06.328 { 00:23:06.328 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:06.328 "subtype": "Discovery", 00:23:06.328 "listen_addresses": [], 00:23:06.328 "allow_any_host": true, 00:23:06.328 "hosts": [] 00:23:06.328 }, 00:23:06.328 { 00:23:06.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.328 "subtype": "NVMe", 00:23:06.328 "listen_addresses": [ 00:23:06.328 { 00:23:06.328 "trtype": "TCP", 00:23:06.328 "adrfam": "IPv4", 00:23:06.328 "traddr": "10.0.0.2", 00:23:06.328 "trsvcid": "4420" 00:23:06.328 } 00:23:06.328 ], 00:23:06.328 "allow_any_host": true, 00:23:06.328 "hosts": [], 00:23:06.328 "serial_number": "SPDK00000000000001", 00:23:06.328 "model_number": "SPDK bdev Controller", 00:23:06.328 "max_namespaces": 2, 00:23:06.328 "min_cntlid": 1, 00:23:06.328 "max_cntlid": 65519, 00:23:06.328 "namespaces": [ 00:23:06.328 { 00:23:06.328 "nsid": 1, 00:23:06.328 "bdev_name": "Malloc0", 00:23:06.328 "name": "Malloc0", 00:23:06.328 "nguid": "436F57E27072491D9B08556F40639EDE", 00:23:06.328 "uuid": "436f57e2-7072-491d-9b08-556f40639ede" 00:23:06.328 } 00:23:06.328 ] 00:23:06.328 } 00:23:06.328 ] 00:23:06.328 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.328 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:06.328 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:06.328 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3819739 00:23:06.328 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:06.328 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:06.328 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:06.328 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:06.328 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:06.328 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:06.328 19:24:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.328 Malloc1 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.328 Asynchronous Event Request test 00:23:06.328 Attaching to 10.0.0.2 00:23:06.328 Attached to 10.0.0.2 00:23:06.328 Registering asynchronous event callbacks... 00:23:06.328 Starting namespace attribute notice tests for all controllers... 00:23:06.328 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:06.328 aer_cb - Changed Namespace 00:23:06.328 Cleaning up... 
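The aer.sh flow traced above is: create the TCP transport and a subsystem with one Malloc namespace, start the aer example tool against it, then hot-add a second namespace so the target emits a namespace-attribute-changed AEN, which the tool reports as "aer_cb - Changed Namespace" before cleaning up. A condensed sketch of the same sequence follows; it uses ./scripts/rpc.py and a repo-relative path to the aer binary rather than the test's rpc_cmd wrapper and absolute workspace paths, and is meant as an illustration of the flow, not a drop-in script.

# Target-side setup, as driven by host/aer.sh above.
RPC='./scripts/rpc.py'
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 --name Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: the aer example waits for namespace-change notices on the subsystem.
./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &

# Hot-adding a second namespace is what triggers the AEN seen in the log.
$RPC bdev_malloc_create 64 4096 --name Malloc1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2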
00:23:06.328 [ 00:23:06.328 { 00:23:06.328 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:06.328 "subtype": "Discovery", 00:23:06.328 "listen_addresses": [], 00:23:06.328 "allow_any_host": true, 00:23:06.328 "hosts": [] 00:23:06.328 }, 00:23:06.328 { 00:23:06.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.328 "subtype": "NVMe", 00:23:06.328 "listen_addresses": [ 00:23:06.328 { 00:23:06.328 "trtype": "TCP", 00:23:06.328 "adrfam": "IPv4", 00:23:06.328 "traddr": "10.0.0.2", 00:23:06.328 "trsvcid": "4420" 00:23:06.328 } 00:23:06.328 ], 00:23:06.328 "allow_any_host": true, 00:23:06.328 "hosts": [], 00:23:06.328 "serial_number": "SPDK00000000000001", 00:23:06.328 "model_number": "SPDK bdev Controller", 00:23:06.328 "max_namespaces": 2, 00:23:06.328 "min_cntlid": 1, 00:23:06.328 "max_cntlid": 65519, 00:23:06.328 "namespaces": [ 00:23:06.328 { 00:23:06.328 "nsid": 1, 00:23:06.328 "bdev_name": "Malloc0", 00:23:06.328 "name": "Malloc0", 00:23:06.328 "nguid": "436F57E27072491D9B08556F40639EDE", 00:23:06.328 "uuid": "436f57e2-7072-491d-9b08-556f40639ede" 00:23:06.328 }, 00:23:06.328 { 00:23:06.328 "nsid": 2, 00:23:06.328 "bdev_name": "Malloc1", 00:23:06.328 "name": "Malloc1", 00:23:06.328 "nguid": "F71F01E1AE9D414CA327413E5ADAD3F7", 00:23:06.328 "uuid": "f71f01e1-ae9d-414c-a327-413e5adad3f7" 00:23:06.328 } 00:23:06.328 ] 00:23:06.328 } 00:23:06.328 ] 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3819739 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:06.328 rmmod 
nvme_tcp 00:23:06.328 rmmod nvme_fabrics 00:23:06.328 rmmod nvme_keyring 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3819510 ']' 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3819510 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3819510 ']' 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3819510 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3819510 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3819510' 00:23:06.328 killing process with pid 3819510 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3819510 00:23:06.328 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3819510 00:23:06.588 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:06.588 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:06.588 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:06.588 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:06.588 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:06.588 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:06.588 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:06.588 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:06.588 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:06.588 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.588 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.588 19:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:09.122 00:23:09.122 real 0m9.239s 00:23:09.122 user 0m5.236s 00:23:09.122 sys 0m4.827s 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.122 ************************************ 00:23:09.122 END TEST nvmf_aer 00:23:09.122 ************************************ 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.122 ************************************ 00:23:09.122 START TEST nvmf_async_init 00:23:09.122 ************************************ 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:09.122 * Looking for test storage... 00:23:09.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:09.122 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:09.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.123 --rc genhtml_branch_coverage=1 00:23:09.123 --rc genhtml_function_coverage=1 00:23:09.123 --rc genhtml_legend=1 00:23:09.123 --rc geninfo_all_blocks=1 00:23:09.123 --rc geninfo_unexecuted_blocks=1 00:23:09.123 00:23:09.123 ' 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:09.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.123 --rc genhtml_branch_coverage=1 00:23:09.123 --rc genhtml_function_coverage=1 00:23:09.123 --rc genhtml_legend=1 00:23:09.123 --rc geninfo_all_blocks=1 00:23:09.123 --rc geninfo_unexecuted_blocks=1 00:23:09.123 00:23:09.123 ' 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:09.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.123 --rc genhtml_branch_coverage=1 00:23:09.123 --rc genhtml_function_coverage=1 00:23:09.123 --rc genhtml_legend=1 00:23:09.123 --rc geninfo_all_blocks=1 00:23:09.123 --rc geninfo_unexecuted_blocks=1 00:23:09.123 00:23:09.123 ' 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:09.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.123 --rc genhtml_branch_coverage=1 00:23:09.123 --rc genhtml_function_coverage=1 00:23:09.123 --rc genhtml_legend=1 00:23:09.123 --rc geninfo_all_blocks=1 00:23:09.123 --rc geninfo_unexecuted_blocks=1 00:23:09.123 00:23:09.123 ' 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.123 19:24:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:09.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:09.123 19:24:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=b63d76c4067c4fafb2239ca56abd5302 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:09.123 19:24:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.694 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:15.695 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:15.695 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:15.695 Found net devices under 0000:86:00.0: cvl_0_0 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:15.695 Found net devices under 0000:86:00.1: cvl_0_1 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.695 19:24:37 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:15.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:23:15.695 00:23:15.695 --- 10.0.0.2 ping statistics --- 00:23:15.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.695 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:15.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:23:15.695 00:23:15.695 --- 10.0.0.1 ping statistics --- 00:23:15.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.695 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3823294 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3823294 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:15.695 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3823294 ']' 00:23:15.696 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.696 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.696 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.696 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.696 19:24:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.696 [2024-11-26 19:24:37.904835] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:23:15.696 [2024-11-26 19:24:37.904882] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.696 [2024-11-26 19:24:37.982509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.696 [2024-11-26 19:24:38.025657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.696 [2024-11-26 19:24:38.025694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.696 [2024-11-26 19:24:38.025701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.696 [2024-11-26 19:24:38.025707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.696 [2024-11-26 19:24:38.025712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.696 [2024-11-26 19:24:38.026262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.696 [2024-11-26 19:24:38.767837] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.696 null0 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b63d76c4067c4fafb2239ca56abd5302 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.696 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.954 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.954 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:15.954 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.954 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.954 [2024-11-26 19:24:38.816090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.954 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.954 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:15.954 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.954 19:24:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.954 nvme0n1 00:23:15.954 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.954 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:15.954 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.954 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.954 [ 00:23:15.954 { 00:23:15.954 "name": "nvme0n1", 00:23:15.954 "aliases": [ 00:23:15.954 "b63d76c4-067c-4faf-b223-9ca56abd5302" 00:23:15.954 ], 00:23:15.954 "product_name": "NVMe disk", 00:23:15.954 "block_size": 512, 00:23:15.954 "num_blocks": 2097152, 00:23:15.954 "uuid": "b63d76c4-067c-4faf-b223-9ca56abd5302", 00:23:15.954 "numa_id": 1, 00:23:15.954 "assigned_rate_limits": { 00:23:15.954 "rw_ios_per_sec": 0, 00:23:15.954 "rw_mbytes_per_sec": 0, 00:23:15.954 "r_mbytes_per_sec": 0, 00:23:15.954 "w_mbytes_per_sec": 0 00:23:15.954 }, 00:23:15.954 "claimed": false, 00:23:15.954 "zoned": false, 00:23:15.954 "supported_io_types": { 00:23:15.954 "read": true, 00:23:15.954 "write": true, 00:23:15.954 "unmap": false, 00:23:15.954 "flush": true, 00:23:15.954 "reset": true, 00:23:15.954 "nvme_admin": true, 00:23:15.954 "nvme_io": true, 00:23:15.954 "nvme_io_md": false, 00:23:15.954 "write_zeroes": true, 00:23:15.954 "zcopy": false, 00:23:15.954 "get_zone_info": false, 00:23:15.954 "zone_management": false, 00:23:15.954 "zone_append": false, 00:23:15.954 "compare": true, 00:23:15.954 "compare_and_write": true, 00:23:15.954 "abort": true, 00:23:15.954 "seek_hole": false, 00:23:15.954 "seek_data": false, 00:23:15.954 "copy": true, 00:23:15.954 "nvme_iov_md": false 00:23:15.954 }, 00:23:16.216 
"memory_domains": [ 00:23:16.216 { 00:23:16.216 "dma_device_id": "system", 00:23:16.216 "dma_device_type": 1 00:23:16.216 } 00:23:16.216 ], 00:23:16.216 "driver_specific": { 00:23:16.216 "nvme": [ 00:23:16.216 { 00:23:16.216 "trid": { 00:23:16.216 "trtype": "TCP", 00:23:16.216 "adrfam": "IPv4", 00:23:16.216 "traddr": "10.0.0.2", 00:23:16.216 "trsvcid": "4420", 00:23:16.216 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:16.216 }, 00:23:16.216 "ctrlr_data": { 00:23:16.216 "cntlid": 1, 00:23:16.216 "vendor_id": "0x8086", 00:23:16.216 "model_number": "SPDK bdev Controller", 00:23:16.216 "serial_number": "00000000000000000000", 00:23:16.216 "firmware_revision": "25.01", 00:23:16.216 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:16.216 "oacs": { 00:23:16.217 "security": 0, 00:23:16.217 "format": 0, 00:23:16.217 "firmware": 0, 00:23:16.217 "ns_manage": 0 00:23:16.217 }, 00:23:16.217 "multi_ctrlr": true, 00:23:16.217 "ana_reporting": false 00:23:16.217 }, 00:23:16.217 "vs": { 00:23:16.217 "nvme_version": "1.3" 00:23:16.217 }, 00:23:16.217 "ns_data": { 00:23:16.217 "id": 1, 00:23:16.217 "can_share": true 00:23:16.217 } 00:23:16.217 } 00:23:16.217 ], 00:23:16.217 "mp_policy": "active_passive" 00:23:16.217 } 00:23:16.217 } 00:23:16.217 ] 00:23:16.217 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.217 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:16.217 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.217 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.217 [2024-11-26 19:24:39.076625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:16.217 [2024-11-26 19:24:39.076708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd19900 (9): Bad file descriptor 00:23:16.217 [2024-11-26 19:24:39.208742] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:23:16.217 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.217 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:16.217 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.217 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.217 [ 00:23:16.217 { 00:23:16.217 "name": "nvme0n1", 00:23:16.217 "aliases": [ 00:23:16.217 "b63d76c4-067c-4faf-b223-9ca56abd5302" 00:23:16.217 ], 00:23:16.217 "product_name": "NVMe disk", 00:23:16.217 "block_size": 512, 00:23:16.217 "num_blocks": 2097152, 00:23:16.217 "uuid": "b63d76c4-067c-4faf-b223-9ca56abd5302", 00:23:16.217 "numa_id": 1, 00:23:16.217 "assigned_rate_limits": { 00:23:16.217 "rw_ios_per_sec": 0, 00:23:16.217 "rw_mbytes_per_sec": 0, 00:23:16.217 "r_mbytes_per_sec": 0, 00:23:16.217 "w_mbytes_per_sec": 0 00:23:16.217 }, 00:23:16.217 "claimed": false, 00:23:16.217 "zoned": false, 00:23:16.217 "supported_io_types": { 00:23:16.217 "read": true, 00:23:16.217 "write": true, 00:23:16.217 "unmap": false, 00:23:16.217 "flush": true, 00:23:16.217 "reset": true, 00:23:16.217 "nvme_admin": true, 00:23:16.217 "nvme_io": true, 00:23:16.217 "nvme_io_md": false, 00:23:16.217 "write_zeroes": true, 00:23:16.218 "zcopy": false, 00:23:16.218 "get_zone_info": false, 00:23:16.218 "zone_management": false, 00:23:16.218 "zone_append": false, 00:23:16.218 "compare": true, 00:23:16.218 "compare_and_write": true, 00:23:16.218 "abort": true, 00:23:16.218 "seek_hole": false, 00:23:16.218 "seek_data": false, 00:23:16.218 "copy": true, 00:23:16.218 "nvme_iov_md": false 00:23:16.218 }, 00:23:16.218 "memory_domains": [ 00:23:16.218 { 00:23:16.218 "dma_device_id": "system", 00:23:16.218 "dma_device_type": 1 00:23:16.218 } 00:23:16.218 ], 00:23:16.218 "driver_specific": { 00:23:16.218 "nvme": [ 00:23:16.218 { 00:23:16.218 "trid": { 00:23:16.218 "trtype": "TCP", 00:23:16.218 "adrfam": "IPv4", 00:23:16.218 "traddr": "10.0.0.2", 00:23:16.218 "trsvcid": "4420", 00:23:16.218 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:16.218 }, 00:23:16.218 "ctrlr_data": { 00:23:16.218 "cntlid": 2, 00:23:16.218 "vendor_id": "0x8086", 00:23:16.218 "model_number": "SPDK bdev Controller", 00:23:16.218 "serial_number": "00000000000000000000", 00:23:16.218 "firmware_revision": "25.01", 00:23:16.218 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:16.218 "oacs": { 00:23:16.218 "security": 0, 00:23:16.218 "format": 0, 00:23:16.218 "firmware": 0, 00:23:16.218 "ns_manage": 0 00:23:16.218 }, 00:23:16.218 "multi_ctrlr": true, 00:23:16.218 "ana_reporting": false 00:23:16.218 }, 00:23:16.218 "vs": { 00:23:16.218 "nvme_version": "1.3" 00:23:16.218 }, 00:23:16.218 "ns_data": { 00:23:16.218 "id": 1, 00:23:16.218 "can_share": true 00:23:16.218 } 00:23:16.218 } 00:23:16.218 ], 00:23:16.218 "mp_policy": "active_passive" 00:23:16.218 } 00:23:16.218 } 00:23:16.218 ] 00:23:16.218 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.218 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.218 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.218 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.218 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
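Everything up to this detach is the plain (non-TLS) async-init path; the rpc_cmd calls above reduce to the following sketch (addresses, NQNs, nguid and sizes exactly as used in this run, $rpc shorthand assumed, target already listening on its RPC socket):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc bdev_null_create null0 1024 512          # 1024 MiB null bdev with 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a          # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b63d76c4067c4fafb2239ca56abd5302
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    $rpc bdev_nvme_detach_controller nvme0        # tear the initiator side back down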
00:23:16.218 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:16.218 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.GnYiag9OCv 00:23:16.218 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:16.218 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.GnYiag9OCv 00:23:16.219 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.GnYiag9OCv 00:23:16.219 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.219 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.219 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.219 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:16.219 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.219 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.219 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.219 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:16.219 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.219 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.219 [2024-11-26 19:24:39.281226] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:16.219 [2024-11-26 19:24:39.281335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:16.219 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.219 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.219 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.219 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.219 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.219 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.219 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.220 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.220 [2024-11-26 19:24:39.301295] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.480 nvme0n1 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.480 [ 00:23:16.480 { 00:23:16.480 "name": "nvme0n1", 00:23:16.480 "aliases": [ 00:23:16.480 "b63d76c4-067c-4faf-b223-9ca56abd5302" 00:23:16.480 ], 00:23:16.480 "product_name": "NVMe disk", 00:23:16.480 "block_size": 512, 00:23:16.480 "num_blocks": 2097152, 00:23:16.480 "uuid": "b63d76c4-067c-4faf-b223-9ca56abd5302", 00:23:16.480 "numa_id": 1, 00:23:16.480 "assigned_rate_limits": { 00:23:16.480 "rw_ios_per_sec": 0, 00:23:16.480 "rw_mbytes_per_sec": 0, 00:23:16.480 "r_mbytes_per_sec": 0, 00:23:16.480 "w_mbytes_per_sec": 0 00:23:16.480 }, 00:23:16.480 "claimed": false, 00:23:16.480 "zoned": false, 00:23:16.480 "supported_io_types": { 00:23:16.480 "read": true, 00:23:16.480 "write": true, 00:23:16.480 "unmap": false, 00:23:16.480 "flush": true, 00:23:16.480 "reset": true, 00:23:16.480 "nvme_admin": true, 00:23:16.480 "nvme_io": true, 00:23:16.480 "nvme_io_md": false, 00:23:16.480 "write_zeroes": true, 00:23:16.480 "zcopy": false, 00:23:16.480 "get_zone_info": false, 00:23:16.480 "zone_management": false, 00:23:16.480 "zone_append": false, 00:23:16.480 "compare": true, 00:23:16.480 "compare_and_write": true, 00:23:16.480 "abort": true, 00:23:16.480 "seek_hole": false, 00:23:16.480 "seek_data": false, 00:23:16.480 "copy": true, 00:23:16.480 "nvme_iov_md": false 00:23:16.480 }, 00:23:16.480 "memory_domains": [ 00:23:16.480 { 00:23:16.480 "dma_device_id": "system", 00:23:16.480 "dma_device_type": 1 00:23:16.480 } 00:23:16.480 ], 00:23:16.480 "driver_specific": { 00:23:16.480 "nvme": [ 00:23:16.480 { 00:23:16.480 "trid": { 00:23:16.480 "trtype": "TCP", 00:23:16.480 "adrfam": "IPv4", 00:23:16.480 "traddr": "10.0.0.2", 00:23:16.480 "trsvcid": "4421", 00:23:16.480 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:16.480 }, 00:23:16.480 "ctrlr_data": { 00:23:16.480 "cntlid": 3, 00:23:16.480 "vendor_id": "0x8086", 00:23:16.480 "model_number": "SPDK bdev Controller", 00:23:16.480 "serial_number": "00000000000000000000", 00:23:16.480 "firmware_revision": "25.01", 00:23:16.480 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:16.480 "oacs": { 00:23:16.480 "security": 0, 00:23:16.480 "format": 0, 00:23:16.480 "firmware": 0, 00:23:16.480 "ns_manage": 0 00:23:16.480 }, 00:23:16.480 "multi_ctrlr": true, 00:23:16.480 "ana_reporting": false 00:23:16.480 }, 00:23:16.480 "vs": { 00:23:16.480 "nvme_version": "1.3" 00:23:16.480 }, 00:23:16.480 "ns_data": { 00:23:16.480 "id": 1, 00:23:16.480 "can_share": true 00:23:16.480 } 00:23:16.480 } 00:23:16.480 ], 00:23:16.480 "mp_policy": "active_passive" 00:23:16.480 } 00:23:16.480 } 00:23:16.480 ] 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.GnYiag9OCv 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
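The secured-channel variant just completed differs from the plain path only in the PSK plumbing; assembled from the rpc_cmd calls above into a hand-runnable sketch (key material, NQNs and port 4421 exactly as in this run; the target logs TLS support as experimental):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    key=$(mktemp)    # PSK in NVMe TLS interchange format, same sample key as this test
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
    chmod 0600 "$key"
    $rpc keyring_file_add_key key0 "$key"
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
    $rpc bdev_nvme_detach_controller nvme0 && rm -f "$key"    # clean up the key file afterwards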
00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:16.480 rmmod nvme_tcp 00:23:16.480 rmmod nvme_fabrics 00:23:16.480 rmmod nvme_keyring 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3823294 ']' 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3823294 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3823294 ']' 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3823294 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3823294 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:16.480 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3823294' 00:23:16.480 killing process with pid 3823294 00:23:16.481 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3823294 00:23:16.481 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3823294 00:23:16.739 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:16.739 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:16.739 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:16.739 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:16.739 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:16.739 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:16.739 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:16.739 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:16.739 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:16.739 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:16.739 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.739 19:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.643 19:24:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:18.643 00:23:18.643 real 0m10.042s 00:23:18.643 user 0m3.843s 00:23:18.643 sys 0m4.802s 00:23:18.643 19:24:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:18.643 19:24:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.643 ************************************ 00:23:18.643 END TEST nvmf_async_init 00:23:18.643 ************************************ 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.902 ************************************ 00:23:18.902 START TEST dma 00:23:18.902 ************************************ 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:18.902 * Looking for test storage... 00:23:18.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:18.902 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:18.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.903 --rc genhtml_branch_coverage=1 00:23:18.903 --rc genhtml_function_coverage=1 00:23:18.903 --rc genhtml_legend=1 00:23:18.903 --rc geninfo_all_blocks=1 00:23:18.903 --rc geninfo_unexecuted_blocks=1 00:23:18.903 00:23:18.903 ' 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:18.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.903 --rc genhtml_branch_coverage=1 00:23:18.903 --rc genhtml_function_coverage=1 00:23:18.903 --rc genhtml_legend=1 00:23:18.903 --rc geninfo_all_blocks=1 00:23:18.903 --rc geninfo_unexecuted_blocks=1 00:23:18.903 00:23:18.903 ' 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:18.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.903 --rc genhtml_branch_coverage=1 00:23:18.903 --rc genhtml_function_coverage=1 00:23:18.903 --rc genhtml_legend=1 00:23:18.903 --rc geninfo_all_blocks=1 00:23:18.903 --rc geninfo_unexecuted_blocks=1 00:23:18.903 00:23:18.903 ' 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:18.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.903 --rc genhtml_branch_coverage=1 00:23:18.903 --rc genhtml_function_coverage=1 00:23:18.903 --rc genhtml_legend=1 00:23:18.903 --rc geninfo_all_blocks=1 00:23:18.903 --rc geninfo_unexecuted_blocks=1 00:23:18.903 00:23:18.903 ' 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.903 
19:24:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.903 19:24:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.903 19:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:18.903 19:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:18.903 19:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.903 19:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.903 19:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.903 19:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.903 19:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.903 19:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:18.903 19:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.903 19:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.903 19:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.903 19:24:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.162 19:24:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.162 19:24:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.162 19:24:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:19.162 19:24:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.162 19:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:19.162 19:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:19.162 19:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:19.162 19:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.162 19:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.162 19:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.162 19:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:19.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:19.162 19:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:19.162 19:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:19.162 19:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:19.162 19:24:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:19.162 19:24:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:19.162 00:23:19.162 real 0m0.207s 00:23:19.162 user 0m0.131s 00:23:19.162 sys 0m0.089s 00:23:19.162 19:24:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:19.163 ************************************ 00:23:19.163 END TEST dma 00:23:19.163 ************************************ 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.163 ************************************ 00:23:19.163 START TEST nvmf_identify 00:23:19.163 
************************************ 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:19.163 * Looking for test storage... 00:23:19.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:19.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.163 --rc genhtml_branch_coverage=1 00:23:19.163 --rc genhtml_function_coverage=1 00:23:19.163 --rc genhtml_legend=1 00:23:19.163 --rc geninfo_all_blocks=1 00:23:19.163 --rc geninfo_unexecuted_blocks=1 00:23:19.163 00:23:19.163 ' 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:19.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.163 --rc genhtml_branch_coverage=1 00:23:19.163 --rc genhtml_function_coverage=1 00:23:19.163 --rc genhtml_legend=1 00:23:19.163 --rc geninfo_all_blocks=1 00:23:19.163 --rc geninfo_unexecuted_blocks=1 00:23:19.163 00:23:19.163 ' 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:19.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.163 --rc genhtml_branch_coverage=1 00:23:19.163 --rc genhtml_function_coverage=1 00:23:19.163 --rc genhtml_legend=1 00:23:19.163 --rc geninfo_all_blocks=1 00:23:19.163 --rc geninfo_unexecuted_blocks=1 00:23:19.163 00:23:19.163 ' 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:19.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.163 --rc genhtml_branch_coverage=1 00:23:19.163 --rc genhtml_function_coverage=1 00:23:19.163 --rc genhtml_legend=1 00:23:19.163 --rc geninfo_all_blocks=1 00:23:19.163 --rc geninfo_unexecuted_blocks=1 00:23:19.163 00:23:19.163 ' 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.163 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:19.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:19.422 19:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:25.980 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:25.980 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
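The "Found 0000:86:00.0 (0x8086 - 0x159b)" lines above come from gather_supported_nvmf_pci_devs walking the PCI bus for supported NIC vendor/device IDs and then collecting the kernel net devices that sit under each matching function. A rough standalone equivalent using sysfs, assuming the same Intel E810 pair (0x8086:0x159b) seen in this run; the helper name is illustrative:

    # sketch: list net devices backed by Intel E810 (0x8086:0x159b) PCI functions
    find_e810_netdevs() {
        local pci vendor device netdev
        for pci in /sys/bus/pci/devices/*; do
            vendor=$(<"$pci/vendor")      # e.g. 0x8086
            device=$(<"$pci/device")      # e.g. 0x159b
            [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
            echo "Found ${pci##*/} ($vendor - $device)"
            for netdev in "$pci"/net/*; do
                [[ -e $netdev ]] && echo "  net device under ${pci##*/}: ${netdev##*/}"
            done
        done
    }

    find_e810_netdevs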
00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:25.980 Found net devices under 0000:86:00.0: cvl_0_0 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:25.980 Found net devices under 0000:86:00.1: cvl_0_1 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.980 19:24:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.980 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.980 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.980 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:25.980 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.980 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.980 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:25.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:23:25.981 00:23:25.981 --- 10.0.0.2 ping statistics --- 00:23:25.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.981 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:23:25.981 00:23:25.981 --- 10.0.0.1 ping statistics --- 00:23:25.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.981 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3827125 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3827125 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3827125 ']' 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.981 19:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:25.981 [2024-11-26 19:24:48.295194] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
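The nvmf_tcp_init sequence above moves one E810 port (cvl_0_0) into a private network namespace, addresses both ends, opens TCP port 4420 in iptables, verifies reachability with ping in both directions, and only then launches nvmf_tgt inside that namespace. Condensed into plain commands, this is a sketch of what the trace shows; the interface names, the 10.0.0.0/24 addresses and the relative SPDK paths are the ones from this particular run:

    # target side lives in its own namespace, initiator side stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # allow NVMe/TCP traffic in and confirm both directions answer
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # the target application itself then runs inside the namespace
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF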
00:23:25.981 [2024-11-26 19:24:48.295238] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.981 [2024-11-26 19:24:48.375502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:25.981 [2024-11-26 19:24:48.417625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.981 [2024-11-26 19:24:48.417660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.981 [2024-11-26 19:24:48.417667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.981 [2024-11-26 19:24:48.417676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.981 [2024-11-26 19:24:48.417681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.981 [2024-11-26 19:24:48.419246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.981 [2024-11-26 19:24:48.419359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.981 [2024-11-26 19:24:48.419482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.981 [2024-11-26 19:24:48.419482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.240 [2024-11-26 19:24:49.147190] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.240 Malloc0 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.240 [2024-11-26 19:24:49.249135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.240 [ 00:23:26.240 { 00:23:26.240 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:26.240 "subtype": "Discovery", 00:23:26.240 "listen_addresses": [ 00:23:26.240 { 00:23:26.240 "trtype": "TCP", 00:23:26.240 "adrfam": "IPv4", 00:23:26.240 "traddr": "10.0.0.2", 00:23:26.240 "trsvcid": "4420" 00:23:26.240 } 00:23:26.240 ], 00:23:26.240 "allow_any_host": true, 00:23:26.240 "hosts": [] 00:23:26.240 }, 00:23:26.240 { 00:23:26.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.240 "subtype": "NVMe", 00:23:26.240 "listen_addresses": [ 00:23:26.240 { 00:23:26.240 "trtype": "TCP", 00:23:26.240 "adrfam": "IPv4", 00:23:26.240 "traddr": "10.0.0.2", 00:23:26.240 "trsvcid": "4420" 00:23:26.240 } 00:23:26.240 ], 00:23:26.240 "allow_any_host": true, 00:23:26.240 "hosts": [], 00:23:26.240 "serial_number": "SPDK00000000000001", 00:23:26.240 "model_number": "SPDK bdev Controller", 00:23:26.240 "max_namespaces": 32, 00:23:26.240 "min_cntlid": 1, 00:23:26.240 "max_cntlid": 65519, 00:23:26.240 "namespaces": [ 00:23:26.240 { 00:23:26.240 "nsid": 1, 00:23:26.240 "bdev_name": "Malloc0", 00:23:26.240 "name": "Malloc0", 00:23:26.240 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:26.240 "eui64": "ABCDEF0123456789", 00:23:26.240 "uuid": "faa0ea06-17d9-47b1-ae05-b20eb09cddd4" 00:23:26.240 } 00:23:26.240 ] 00:23:26.240 } 00:23:26.240 ] 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.240 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:26.240 [2024-11-26 19:24:49.302305] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:23:26.240 [2024-11-26 19:24:49.302354] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3827372 ] 00:23:26.240 [2024-11-26 19:24:49.342197] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:26.240 [2024-11-26 19:24:49.342245] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:26.240 [2024-11-26 19:24:49.342250] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:26.240 [2024-11-26 19:24:49.342267] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:26.240 [2024-11-26 19:24:49.342275] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:26.240 [2024-11-26 19:24:49.345992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:26.240 [2024-11-26 19:24:49.346027] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2178690 0 00:23:26.505 [2024-11-26 19:24:49.353688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:26.505 [2024-11-26 19:24:49.353709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:26.505 [2024-11-26 19:24:49.353715] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:26.505 [2024-11-26 19:24:49.353718] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:26.505 [2024-11-26 19:24:49.353757] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.353763] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.353767] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2178690) 00:23:26.505 [2024-11-26 19:24:49.353791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:26.505 [2024-11-26 19:24:49.353811] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da100, cid 0, qid 0 00:23:26.505 [2024-11-26 19:24:49.361678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.505 [2024-11-26 19:24:49.361691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.505 [2024-11-26 19:24:49.361695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.361699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da100) on tqpair=0x2178690 00:23:26.505 [2024-11-26 19:24:49.361711] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:26.505 [2024-11-26 19:24:49.361719] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:26.505 [2024-11-26 19:24:49.361724] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:26.505 [2024-11-26 19:24:49.361741] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.361745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.361749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2178690) 00:23:26.505 [2024-11-26 19:24:49.361756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.505 [2024-11-26 19:24:49.361771] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da100, cid 0, qid 0 00:23:26.505 [2024-11-26 19:24:49.361934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.505 [2024-11-26 19:24:49.361940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.505 [2024-11-26 19:24:49.361943] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.361947] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da100) on tqpair=0x2178690 00:23:26.505 [2024-11-26 19:24:49.361954] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:26.505 [2024-11-26 19:24:49.361964] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:26.505 [2024-11-26 19:24:49.361971] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.361974] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.361977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2178690) 00:23:26.505 [2024-11-26 19:24:49.361982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.505 [2024-11-26 19:24:49.361993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da100, cid 0, qid 0 00:23:26.505 [2024-11-26 19:24:49.362058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.505 [2024-11-26 19:24:49.362064] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.505 [2024-11-26 19:24:49.362067] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.362070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da100) on tqpair=0x2178690 00:23:26.505 [2024-11-26 19:24:49.362076] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:26.505 [2024-11-26 19:24:49.362082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:26.505 [2024-11-26 19:24:49.362088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.362092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.362095] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2178690) 00:23:26.505 [2024-11-26 19:24:49.362100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.505 [2024-11-26 19:24:49.362110] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da100, cid 0, qid 0 
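Before the controller-init debug above, identify.sh configured the running target over its RPC socket: create the TCP transport, back a namespace with a 64 MiB malloc bdev, create subsystem nqn.2016-06.io.spdk:cnode1, attach the namespace, and add TCP listeners on 10.0.0.2:4420 for both the subsystem and the discovery service. The harness drives this through its rpc_cmd helper; the following is a sketch of roughly equivalent standalone scripts/rpc.py calls (an assumption about the wrapper, with the arguments copied from the trace above):

    RPC=./scripts/rpc.py      # talks to /var/tmp/spdk.sock by default

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
         --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420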
00:23:26.505 [2024-11-26 19:24:49.362173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.505 [2024-11-26 19:24:49.362179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.505 [2024-11-26 19:24:49.362182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.362185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da100) on tqpair=0x2178690 00:23:26.505 [2024-11-26 19:24:49.362190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:26.505 [2024-11-26 19:24:49.362198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.362201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.362204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2178690) 00:23:26.505 [2024-11-26 19:24:49.362210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.505 [2024-11-26 19:24:49.362219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da100, cid 0, qid 0 00:23:26.505 [2024-11-26 19:24:49.362293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.505 [2024-11-26 19:24:49.362298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.505 [2024-11-26 19:24:49.362301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.362304] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da100) on tqpair=0x2178690 00:23:26.505 [2024-11-26 19:24:49.362308] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:26.505 [2024-11-26 19:24:49.362313] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:26.505 [2024-11-26 19:24:49.362322] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:26.505 [2024-11-26 19:24:49.362430] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:26.505 [2024-11-26 19:24:49.362434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:26.505 [2024-11-26 19:24:49.362442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.362446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.362449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2178690) 00:23:26.505 [2024-11-26 19:24:49.362454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.505 [2024-11-26 19:24:49.362464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da100, cid 0, qid 0 00:23:26.505 [2024-11-26 19:24:49.362528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.505 [2024-11-26 19:24:49.362534] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.505 [2024-11-26 19:24:49.362537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.362540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da100) on tqpair=0x2178690 00:23:26.505 [2024-11-26 19:24:49.362544] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:26.505 [2024-11-26 19:24:49.362552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.362556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.505 [2024-11-26 19:24:49.362559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2178690) 00:23:26.505 [2024-11-26 19:24:49.362565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.506 [2024-11-26 19:24:49.362574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da100, cid 0, qid 0 00:23:26.506 [2024-11-26 19:24:49.362635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.506 [2024-11-26 19:24:49.362641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.506 [2024-11-26 19:24:49.362644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.362647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da100) on tqpair=0x2178690 00:23:26.506 [2024-11-26 19:24:49.362651] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:26.506 [2024-11-26 19:24:49.362655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:26.506 [2024-11-26 19:24:49.362662] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:26.506 [2024-11-26 19:24:49.362673] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:26.506 [2024-11-26 19:24:49.362682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.362685] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2178690) 00:23:26.506 [2024-11-26 19:24:49.362691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.506 [2024-11-26 19:24:49.362700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da100, cid 0, qid 0 00:23:26.506 [2024-11-26 19:24:49.362800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.506 [2024-11-26 19:24:49.362810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.506 [2024-11-26 19:24:49.362813] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.362817] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2178690): datao=0, datal=4096, cccid=0 00:23:26.506 [2024-11-26 19:24:49.362821] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x21da100) on tqpair(0x2178690): expected_datao=0, payload_size=4096 00:23:26.506 [2024-11-26 19:24:49.362825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.362831] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.362835] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.362851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.506 [2024-11-26 19:24:49.362856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.506 [2024-11-26 19:24:49.362858] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.362862] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da100) on tqpair=0x2178690 00:23:26.506 [2024-11-26 19:24:49.362869] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:26.506 [2024-11-26 19:24:49.362873] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:26.506 [2024-11-26 19:24:49.362877] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:26.506 [2024-11-26 19:24:49.362881] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:26.506 [2024-11-26 19:24:49.362885] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:26.506 [2024-11-26 19:24:49.362889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:26.506 [2024-11-26 19:24:49.362898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:26.506 [2024-11-26 19:24:49.362904] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.362907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.362910] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2178690) 00:23:26.506 [2024-11-26 19:24:49.362916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:26.506 [2024-11-26 19:24:49.362927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da100, cid 0, qid 0 00:23:26.506 [2024-11-26 19:24:49.362996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.506 [2024-11-26 19:24:49.363002] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.506 [2024-11-26 19:24:49.363005] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.363008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da100) on tqpair=0x2178690 00:23:26.506 [2024-11-26 19:24:49.363014] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.363017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.363021] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2178690) 00:23:26.506 
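With the subsystem in place, nvmf_get_subsystems (whose JSON reply appears earlier in this trace, listing the discovery subsystem and cnode1 with the Malloc0 namespace) confirms the configuration, and the test proper is a single run of the spdk_nvme_identify example against the discovery service; the -L all flag is what produces the nvme_tcp/nvme_ctrlr DEBUG lines filling the rest of this log. A sketch reusing the exact transport string from this run, with paths relative to an SPDK checkout:

    # verify the target configuration, then run identify against the discovery service
    ./scripts/rpc.py nvmf_get_subsystems

    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all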
[2024-11-26 19:24:49.363025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.506 [2024-11-26 19:24:49.363031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.363034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.363037] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2178690) 00:23:26.506 [2024-11-26 19:24:49.363043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.506 [2024-11-26 19:24:49.363048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.363052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.363055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2178690) 00:23:26.506 [2024-11-26 19:24:49.363059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.506 [2024-11-26 19:24:49.363064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.363067] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.363070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2178690) 00:23:26.506 [2024-11-26 19:24:49.363075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.506 [2024-11-26 19:24:49.363079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:26.506 [2024-11-26 19:24:49.363090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:26.506 [2024-11-26 19:24:49.363096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.363099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2178690) 00:23:26.506 [2024-11-26 19:24:49.363105] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.506 [2024-11-26 19:24:49.363116] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da100, cid 0, qid 0 00:23:26.506 [2024-11-26 19:24:49.363120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da280, cid 1, qid 0 00:23:26.506 [2024-11-26 19:24:49.363124] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da400, cid 2, qid 0 00:23:26.506 [2024-11-26 19:24:49.363128] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da580, cid 3, qid 0 00:23:26.506 [2024-11-26 19:24:49.363132] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da700, cid 4, qid 0 00:23:26.506 [2024-11-26 19:24:49.363229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.506 [2024-11-26 19:24:49.363234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.506 [2024-11-26 19:24:49.363237] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:23:26.506 [2024-11-26 19:24:49.363240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da700) on tqpair=0x2178690 00:23:26.506 [2024-11-26 19:24:49.363245] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:26.506 [2024-11-26 19:24:49.363250] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:23:26.506 [2024-11-26 19:24:49.363259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.363262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2178690) 00:23:26.506 [2024-11-26 19:24:49.363268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.506 [2024-11-26 19:24:49.363277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da700, cid 4, qid 0 00:23:26.506 [2024-11-26 19:24:49.363353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.506 [2024-11-26 19:24:49.363359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.506 [2024-11-26 19:24:49.363362] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.506 [2024-11-26 19:24:49.363365] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2178690): datao=0, datal=4096, cccid=4 00:23:26.507 [2024-11-26 19:24:49.363371] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21da700) on tqpair(0x2178690): expected_datao=0, payload_size=4096 00:23:26.507 [2024-11-26 19:24:49.363375] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.363387] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.363391] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.405677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.507 [2024-11-26 19:24:49.405688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.507 [2024-11-26 19:24:49.405691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.405694] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da700) on tqpair=0x2178690 00:23:26.507 [2024-11-26 19:24:49.405708] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:26.507 [2024-11-26 19:24:49.405732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.405736] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2178690) 00:23:26.507 [2024-11-26 19:24:49.405743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.507 [2024-11-26 19:24:49.405749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.405752] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.405755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2178690) 00:23:26.507 [2024-11-26 19:24:49.405760] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.507 [2024-11-26 19:24:49.405782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da700, cid 4, qid 0 00:23:26.507 [2024-11-26 19:24:49.405787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da880, cid 5, qid 0 00:23:26.507 [2024-11-26 19:24:49.405977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.507 [2024-11-26 19:24:49.405983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.507 [2024-11-26 19:24:49.405986] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.405990] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2178690): datao=0, datal=1024, cccid=4 00:23:26.507 [2024-11-26 19:24:49.405994] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21da700) on tqpair(0x2178690): expected_datao=0, payload_size=1024 00:23:26.507 [2024-11-26 19:24:49.405997] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.406003] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.406006] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.406011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.507 [2024-11-26 19:24:49.406016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.507 [2024-11-26 19:24:49.406019] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.406023] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da880) on tqpair=0x2178690 00:23:26.507 [2024-11-26 19:24:49.447827] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.507 [2024-11-26 19:24:49.447839] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.507 [2024-11-26 19:24:49.447843] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.447846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da700) on tqpair=0x2178690 00:23:26.507 [2024-11-26 19:24:49.447858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.447861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2178690) 00:23:26.507 [2024-11-26 19:24:49.447868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.507 [2024-11-26 19:24:49.447886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da700, cid 4, qid 0 00:23:26.507 [2024-11-26 19:24:49.447985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.507 [2024-11-26 19:24:49.447991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.507 [2024-11-26 19:24:49.447995] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.447998] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2178690): datao=0, datal=3072, cccid=4 00:23:26.507 [2024-11-26 19:24:49.448001] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21da700) on tqpair(0x2178690): expected_datao=0, payload_size=3072 00:23:26.507 [2024-11-26 19:24:49.448005] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.448011] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.448014] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.448024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.507 [2024-11-26 19:24:49.448030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.507 [2024-11-26 19:24:49.448033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.448036] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da700) on tqpair=0x2178690 00:23:26.507 [2024-11-26 19:24:49.448044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.448047] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2178690) 00:23:26.507 [2024-11-26 19:24:49.448053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.507 [2024-11-26 19:24:49.448065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da700, cid 4, qid 0 00:23:26.507 [2024-11-26 19:24:49.448142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.507 [2024-11-26 19:24:49.448148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.507 [2024-11-26 19:24:49.448151] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.448154] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2178690): datao=0, datal=8, cccid=4 00:23:26.507 [2024-11-26 19:24:49.448158] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21da700) on tqpair(0x2178690): expected_datao=0, payload_size=8 00:23:26.507 [2024-11-26 19:24:49.448162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.448167] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.448170] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.489821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.507 [2024-11-26 19:24:49.489830] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.507 [2024-11-26 19:24:49.489834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.507 [2024-11-26 19:24:49.489837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da700) on tqpair=0x2178690 00:23:26.507 ===================================================== 00:23:26.507 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:26.507 ===================================================== 00:23:26.507 Controller Capabilities/Features 00:23:26.507 ================================ 00:23:26.507 Vendor ID: 0000 00:23:26.507 Subsystem Vendor ID: 0000 00:23:26.507 Serial Number: .................... 00:23:26.507 Model Number: ........................................ 
00:23:26.507 Firmware Version: 25.01 00:23:26.507 Recommended Arb Burst: 0 00:23:26.507 IEEE OUI Identifier: 00 00 00 00:23:26.507 Multi-path I/O 00:23:26.507 May have multiple subsystem ports: No 00:23:26.507 May have multiple controllers: No 00:23:26.507 Associated with SR-IOV VF: No 00:23:26.507 Max Data Transfer Size: 131072 00:23:26.507 Max Number of Namespaces: 0 00:23:26.507 Max Number of I/O Queues: 1024 00:23:26.507 NVMe Specification Version (VS): 1.3 00:23:26.507 NVMe Specification Version (Identify): 1.3 00:23:26.507 Maximum Queue Entries: 128 00:23:26.507 Contiguous Queues Required: Yes 00:23:26.507 Arbitration Mechanisms Supported 00:23:26.507 Weighted Round Robin: Not Supported 00:23:26.507 Vendor Specific: Not Supported 00:23:26.507 Reset Timeout: 15000 ms 00:23:26.507 Doorbell Stride: 4 bytes 00:23:26.507 NVM Subsystem Reset: Not Supported 00:23:26.507 Command Sets Supported 00:23:26.507 NVM Command Set: Supported 00:23:26.507 Boot Partition: Not Supported 00:23:26.507 Memory Page Size Minimum: 4096 bytes 00:23:26.507 Memory Page Size Maximum: 4096 bytes 00:23:26.507 Persistent Memory Region: Not Supported 00:23:26.507 Optional Asynchronous Events Supported 00:23:26.507 Namespace Attribute Notices: Not Supported 00:23:26.507 Firmware Activation Notices: Not Supported 00:23:26.507 ANA Change Notices: Not Supported 00:23:26.507 PLE Aggregate Log Change Notices: Not Supported 00:23:26.507 LBA Status Info Alert Notices: Not Supported 00:23:26.507 EGE Aggregate Log Change Notices: Not Supported 00:23:26.507 Normal NVM Subsystem Shutdown event: Not Supported 00:23:26.507 Zone Descriptor Change Notices: Not Supported 00:23:26.507 Discovery Log Change Notices: Supported 00:23:26.507 Controller Attributes 00:23:26.507 128-bit Host Identifier: Not Supported 00:23:26.507 Non-Operational Permissive Mode: Not Supported 00:23:26.507 NVM Sets: Not Supported 00:23:26.507 Read Recovery Levels: Not Supported 00:23:26.507 Endurance Groups: Not Supported 00:23:26.507 Predictable Latency Mode: Not Supported 00:23:26.507 Traffic Based Keep ALive: Not Supported 00:23:26.507 Namespace Granularity: Not Supported 00:23:26.507 SQ Associations: Not Supported 00:23:26.508 UUID List: Not Supported 00:23:26.508 Multi-Domain Subsystem: Not Supported 00:23:26.508 Fixed Capacity Management: Not Supported 00:23:26.508 Variable Capacity Management: Not Supported 00:23:26.508 Delete Endurance Group: Not Supported 00:23:26.508 Delete NVM Set: Not Supported 00:23:26.508 Extended LBA Formats Supported: Not Supported 00:23:26.508 Flexible Data Placement Supported: Not Supported 00:23:26.508 00:23:26.508 Controller Memory Buffer Support 00:23:26.508 ================================ 00:23:26.508 Supported: No 00:23:26.508 00:23:26.508 Persistent Memory Region Support 00:23:26.508 ================================ 00:23:26.508 Supported: No 00:23:26.508 00:23:26.508 Admin Command Set Attributes 00:23:26.508 ============================ 00:23:26.508 Security Send/Receive: Not Supported 00:23:26.508 Format NVM: Not Supported 00:23:26.508 Firmware Activate/Download: Not Supported 00:23:26.508 Namespace Management: Not Supported 00:23:26.508 Device Self-Test: Not Supported 00:23:26.508 Directives: Not Supported 00:23:26.508 NVMe-MI: Not Supported 00:23:26.508 Virtualization Management: Not Supported 00:23:26.508 Doorbell Buffer Config: Not Supported 00:23:26.508 Get LBA Status Capability: Not Supported 00:23:26.508 Command & Feature Lockdown Capability: Not Supported 00:23:26.508 Abort Command Limit: 1 00:23:26.508 Async 
Event Request Limit: 4 00:23:26.508 Number of Firmware Slots: N/A 00:23:26.508 Firmware Slot 1 Read-Only: N/A 00:23:26.508 Firmware Activation Without Reset: N/A 00:23:26.508 Multiple Update Detection Support: N/A 00:23:26.508 Firmware Update Granularity: No Information Provided 00:23:26.508 Per-Namespace SMART Log: No 00:23:26.508 Asymmetric Namespace Access Log Page: Not Supported 00:23:26.508 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:26.508 Command Effects Log Page: Not Supported 00:23:26.508 Get Log Page Extended Data: Supported 00:23:26.508 Telemetry Log Pages: Not Supported 00:23:26.508 Persistent Event Log Pages: Not Supported 00:23:26.508 Supported Log Pages Log Page: May Support 00:23:26.508 Commands Supported & Effects Log Page: Not Supported 00:23:26.508 Feature Identifiers & Effects Log Page:May Support 00:23:26.508 NVMe-MI Commands & Effects Log Page: May Support 00:23:26.508 Data Area 4 for Telemetry Log: Not Supported 00:23:26.508 Error Log Page Entries Supported: 128 00:23:26.508 Keep Alive: Not Supported 00:23:26.508 00:23:26.508 NVM Command Set Attributes 00:23:26.508 ========================== 00:23:26.508 Submission Queue Entry Size 00:23:26.508 Max: 1 00:23:26.508 Min: 1 00:23:26.508 Completion Queue Entry Size 00:23:26.508 Max: 1 00:23:26.508 Min: 1 00:23:26.508 Number of Namespaces: 0 00:23:26.508 Compare Command: Not Supported 00:23:26.508 Write Uncorrectable Command: Not Supported 00:23:26.508 Dataset Management Command: Not Supported 00:23:26.508 Write Zeroes Command: Not Supported 00:23:26.508 Set Features Save Field: Not Supported 00:23:26.508 Reservations: Not Supported 00:23:26.508 Timestamp: Not Supported 00:23:26.508 Copy: Not Supported 00:23:26.508 Volatile Write Cache: Not Present 00:23:26.508 Atomic Write Unit (Normal): 1 00:23:26.508 Atomic Write Unit (PFail): 1 00:23:26.508 Atomic Compare & Write Unit: 1 00:23:26.508 Fused Compare & Write: Supported 00:23:26.508 Scatter-Gather List 00:23:26.508 SGL Command Set: Supported 00:23:26.508 SGL Keyed: Supported 00:23:26.508 SGL Bit Bucket Descriptor: Not Supported 00:23:26.508 SGL Metadata Pointer: Not Supported 00:23:26.508 Oversized SGL: Not Supported 00:23:26.508 SGL Metadata Address: Not Supported 00:23:26.508 SGL Offset: Supported 00:23:26.508 Transport SGL Data Block: Not Supported 00:23:26.508 Replay Protected Memory Block: Not Supported 00:23:26.508 00:23:26.508 Firmware Slot Information 00:23:26.508 ========================= 00:23:26.508 Active slot: 0 00:23:26.508 00:23:26.508 00:23:26.508 Error Log 00:23:26.508 ========= 00:23:26.508 00:23:26.508 Active Namespaces 00:23:26.508 ================= 00:23:26.508 Discovery Log Page 00:23:26.508 ================== 00:23:26.508 Generation Counter: 2 00:23:26.508 Number of Records: 2 00:23:26.508 Record Format: 0 00:23:26.508 00:23:26.508 Discovery Log Entry 0 00:23:26.508 ---------------------- 00:23:26.508 Transport Type: 3 (TCP) 00:23:26.508 Address Family: 1 (IPv4) 00:23:26.508 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:26.508 Entry Flags: 00:23:26.508 Duplicate Returned Information: 1 00:23:26.508 Explicit Persistent Connection Support for Discovery: 1 00:23:26.508 Transport Requirements: 00:23:26.508 Secure Channel: Not Required 00:23:26.508 Port ID: 0 (0x0000) 00:23:26.508 Controller ID: 65535 (0xffff) 00:23:26.508 Admin Max SQ Size: 128 00:23:26.508 Transport Service Identifier: 4420 00:23:26.508 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:26.508 Transport Address: 10.0.0.2 00:23:26.508 
Discovery Log Entry 1 00:23:26.508 ---------------------- 00:23:26.508 Transport Type: 3 (TCP) 00:23:26.508 Address Family: 1 (IPv4) 00:23:26.508 Subsystem Type: 2 (NVM Subsystem) 00:23:26.508 Entry Flags: 00:23:26.508 Duplicate Returned Information: 0 00:23:26.508 Explicit Persistent Connection Support for Discovery: 0 00:23:26.508 Transport Requirements: 00:23:26.508 Secure Channel: Not Required 00:23:26.508 Port ID: 0 (0x0000) 00:23:26.508 Controller ID: 65535 (0xffff) 00:23:26.508 Admin Max SQ Size: 128 00:23:26.508 Transport Service Identifier: 4420 00:23:26.508 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:26.508 Transport Address: 10.0.0.2 [2024-11-26 19:24:49.489916] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:23:26.508 [2024-11-26 19:24:49.489927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da100) on tqpair=0x2178690 00:23:26.508 [2024-11-26 19:24:49.489933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.508 [2024-11-26 19:24:49.489937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da280) on tqpair=0x2178690 00:23:26.508 [2024-11-26 19:24:49.489941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.508 [2024-11-26 19:24:49.489947] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da400) on tqpair=0x2178690 00:23:26.508 [2024-11-26 19:24:49.489951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.508 [2024-11-26 19:24:49.489955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da580) on tqpair=0x2178690 00:23:26.508 [2024-11-26 19:24:49.489959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.508 [2024-11-26 19:24:49.489967] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.508 [2024-11-26 19:24:49.489970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.508 [2024-11-26 19:24:49.489973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2178690) 00:23:26.508 [2024-11-26 19:24:49.489979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.508 [2024-11-26 19:24:49.489993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da580, cid 3, qid 0 00:23:26.508 [2024-11-26 19:24:49.490053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.508 [2024-11-26 19:24:49.490059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.508 [2024-11-26 19:24:49.490062] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.508 [2024-11-26 19:24:49.490065] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da580) on tqpair=0x2178690 00:23:26.508 [2024-11-26 19:24:49.490071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.508 [2024-11-26 19:24:49.490075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.508 [2024-11-26 19:24:49.490078] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2178690) 00:23:26.508 [2024-11-26 
19:24:49.490083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.508 [2024-11-26 19:24:49.490095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da580, cid 3, qid 0 00:23:26.508 [2024-11-26 19:24:49.490171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.508 [2024-11-26 19:24:49.490177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.508 [2024-11-26 19:24:49.490180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.508 [2024-11-26 19:24:49.490183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da580) on tqpair=0x2178690 00:23:26.508 [2024-11-26 19:24:49.490187] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:26.508 [2024-11-26 19:24:49.490192] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:26.508 [2024-11-26 19:24:49.490200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.508 [2024-11-26 19:24:49.490203] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.508 [2024-11-26 19:24:49.490207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2178690) 00:23:26.508 [2024-11-26 19:24:49.490212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.509 [2024-11-26 19:24:49.490222] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da580, cid 3, qid 0 00:23:26.509 [2024-11-26 19:24:49.490290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.509 [2024-11-26 19:24:49.490295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.509 [2024-11-26 19:24:49.490298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490301] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da580) on tqpair=0x2178690 00:23:26.509 [2024-11-26 19:24:49.490310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2178690) 00:23:26.509 [2024-11-26 19:24:49.490324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.509 [2024-11-26 19:24:49.490333] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da580, cid 3, qid 0 00:23:26.509 [2024-11-26 19:24:49.490406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.509 [2024-11-26 19:24:49.490412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.509 [2024-11-26 19:24:49.490415] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490418] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da580) on tqpair=0x2178690 00:23:26.509 [2024-11-26 19:24:49.490426] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490429] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490433] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2178690) 00:23:26.509 [2024-11-26 19:24:49.490438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.509 [2024-11-26 19:24:49.490447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da580, cid 3, qid 0 00:23:26.509 [2024-11-26 19:24:49.490507] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.509 [2024-11-26 19:24:49.490513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.509 [2024-11-26 19:24:49.490516] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da580) on tqpair=0x2178690 00:23:26.509 [2024-11-26 19:24:49.490528] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490532] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2178690) 00:23:26.509 [2024-11-26 19:24:49.490540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.509 [2024-11-26 19:24:49.490550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da580, cid 3, qid 0 00:23:26.509 [2024-11-26 19:24:49.490615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.509 [2024-11-26 19:24:49.490621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.509 [2024-11-26 19:24:49.490624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da580) on tqpair=0x2178690 00:23:26.509 [2024-11-26 19:24:49.490635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490642] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2178690) 00:23:26.509 [2024-11-26 19:24:49.490647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.509 [2024-11-26 19:24:49.490656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da580, cid 3, qid 0 00:23:26.509 [2024-11-26 19:24:49.490734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.509 [2024-11-26 19:24:49.490740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.509 [2024-11-26 19:24:49.490743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490746] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da580) on tqpair=0x2178690 00:23:26.509 [2024-11-26 19:24:49.490754] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490758] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490761] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2178690) 00:23:26.509 [2024-11-26 19:24:49.490766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.509 [2024-11-26 19:24:49.490777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da580, cid 3, qid 0 00:23:26.509 [2024-11-26 19:24:49.490850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.509 [2024-11-26 19:24:49.490855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.509 [2024-11-26 19:24:49.490858] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490861] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da580) on tqpair=0x2178690 00:23:26.509 [2024-11-26 19:24:49.490869] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490873] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490876] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2178690) 00:23:26.509 [2024-11-26 19:24:49.490881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.509 [2024-11-26 19:24:49.490890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da580, cid 3, qid 0 00:23:26.509 [2024-11-26 19:24:49.490949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.509 [2024-11-26 19:24:49.490954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.509 [2024-11-26 19:24:49.490957] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490960] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da580) on tqpair=0x2178690 00:23:26.509 [2024-11-26 19:24:49.490968] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.490975] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2178690) 00:23:26.509 [2024-11-26 19:24:49.490980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.509 [2024-11-26 19:24:49.490990] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da580, cid 3, qid 0 00:23:26.509 [2024-11-26 19:24:49.491083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.509 [2024-11-26 19:24:49.491089] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.509 [2024-11-26 19:24:49.491092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.491095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da580) on tqpair=0x2178690 00:23:26.509 [2024-11-26 19:24:49.491103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.491107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.491110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2178690) 00:23:26.509 [2024-11-26 19:24:49.491115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.509 [2024-11-26 19:24:49.491124] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da580, cid 3, qid 0 00:23:26.509 
[2024-11-26 19:24:49.491186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.509 [2024-11-26 19:24:49.491192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.509 [2024-11-26 19:24:49.491194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.491198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da580) on tqpair=0x2178690 00:23:26.509 [2024-11-26 19:24:49.491205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.491209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.509 [2024-11-26 19:24:49.491212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2178690) 00:23:26.509 [2024-11-26 19:24:49.491217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.509 [2024-11-26 19:24:49.491228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da580, cid 3, qid 0 00:23:26.510 [2024-11-26 19:24:49.491304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.510 [2024-11-26 19:24:49.491310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.510 [2024-11-26 19:24:49.491313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.491316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da580) on tqpair=0x2178690 00:23:26.510 [2024-11-26 19:24:49.491324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.491327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.491330] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2178690) 00:23:26.510 [2024-11-26 19:24:49.491336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.510 [2024-11-26 19:24:49.491344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da580, cid 3, qid 0 00:23:26.510 [2024-11-26 19:24:49.491401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.510 [2024-11-26 19:24:49.491407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.510 [2024-11-26 19:24:49.491410] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.491413] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da580) on tqpair=0x2178690 00:23:26.510 [2024-11-26 19:24:49.491421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.491425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.491428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2178690) 00:23:26.510 [2024-11-26 19:24:49.491433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.510 [2024-11-26 19:24:49.491442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da580, cid 3, qid 0 00:23:26.510 [2024-11-26 19:24:49.491507] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.510 [2024-11-26 19:24:49.491512] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:26.510 [2024-11-26 19:24:49.491515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.491519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da580) on tqpair=0x2178690 00:23:26.510 [2024-11-26 19:24:49.491526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.491530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.491533] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2178690) 00:23:26.510 [2024-11-26 19:24:49.491538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.510 [2024-11-26 19:24:49.491547] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da580, cid 3, qid 0 00:23:26.510 [2024-11-26 19:24:49.491621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.510 [2024-11-26 19:24:49.491627] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.510 [2024-11-26 19:24:49.491630] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.491633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da580) on tqpair=0x2178690 00:23:26.510 [2024-11-26 19:24:49.491641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.491644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.491647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2178690) 00:23:26.510 [2024-11-26 19:24:49.491653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.510 [2024-11-26 19:24:49.491662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da580, cid 3, qid 0 00:23:26.510 [2024-11-26 19:24:49.495676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.510 [2024-11-26 19:24:49.495683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.510 [2024-11-26 19:24:49.495686] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.495689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21da580) on tqpair=0x2178690 00:23:26.510 [2024-11-26 19:24:49.495699] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.495703] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.495706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2178690) 00:23:26.510 [2024-11-26 19:24:49.495711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.510 [2024-11-26 19:24:49.495722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21da580, cid 3, qid 0 00:23:26.510 [2024-11-26 19:24:49.495878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.510 [2024-11-26 19:24:49.495883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.510 [2024-11-26 19:24:49.495886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.495889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x21da580) on tqpair=0x2178690 00:23:26.510 [2024-11-26 19:24:49.495896] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:23:26.510 00:23:26.510 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:26.510 [2024-11-26 19:24:49.533904] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:23:26.510 [2024-11-26 19:24:49.533948] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3827378 ] 00:23:26.510 [2024-11-26 19:24:49.573857] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:26.510 [2024-11-26 19:24:49.573897] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:26.510 [2024-11-26 19:24:49.573901] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:26.510 [2024-11-26 19:24:49.573914] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:26.510 [2024-11-26 19:24:49.573921] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:26.510 [2024-11-26 19:24:49.577848] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:26.510 [2024-11-26 19:24:49.577875] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2356690 0 00:23:26.510 [2024-11-26 19:24:49.585682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:26.510 [2024-11-26 19:24:49.585695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:26.510 [2024-11-26 19:24:49.585698] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:26.510 [2024-11-26 19:24:49.585701] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:26.510 [2024-11-26 19:24:49.585728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.585733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.585737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2356690) 00:23:26.510 [2024-11-26 19:24:49.585749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:26.510 [2024-11-26 19:24:49.585765] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8100, cid 0, qid 0 00:23:26.510 [2024-11-26 19:24:49.593680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.510 [2024-11-26 19:24:49.593688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.510 [2024-11-26 19:24:49.593691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.593695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8100) on tqpair=0x2356690 00:23:26.510 [2024-11-26 19:24:49.593705] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 
0x0001 00:23:26.510 [2024-11-26 19:24:49.593711] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:26.510 [2024-11-26 19:24:49.593716] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:26.510 [2024-11-26 19:24:49.593727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.593731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.510 [2024-11-26 19:24:49.593734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2356690) 00:23:26.510 [2024-11-26 19:24:49.593741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.510 [2024-11-26 19:24:49.593754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8100, cid 0, qid 0 00:23:26.510 [2024-11-26 19:24:49.593892] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.510 [2024-11-26 19:24:49.593898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.511 [2024-11-26 19:24:49.593901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.593905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8100) on tqpair=0x2356690 00:23:26.511 [2024-11-26 19:24:49.593911] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:26.511 [2024-11-26 19:24:49.593918] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:26.511 [2024-11-26 19:24:49.593924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.593927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.593930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2356690) 00:23:26.511 [2024-11-26 19:24:49.593935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.511 [2024-11-26 19:24:49.593946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8100, cid 0, qid 0 00:23:26.511 [2024-11-26 19:24:49.594011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.511 [2024-11-26 19:24:49.594016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.511 [2024-11-26 19:24:49.594019] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.594022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8100) on tqpair=0x2356690 00:23:26.511 [2024-11-26 19:24:49.594027] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:23:26.511 [2024-11-26 19:24:49.594033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:26.511 [2024-11-26 19:24:49.594039] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.594042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.594045] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2356690) 00:23:26.511 [2024-11-26 19:24:49.594051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.511 [2024-11-26 19:24:49.594063] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8100, cid 0, qid 0 00:23:26.511 [2024-11-26 19:24:49.594119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.511 [2024-11-26 19:24:49.594125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.511 [2024-11-26 19:24:49.594128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.594131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8100) on tqpair=0x2356690 00:23:26.511 [2024-11-26 19:24:49.594135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:26.511 [2024-11-26 19:24:49.594143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.594147] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.594150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2356690) 00:23:26.511 [2024-11-26 19:24:49.594156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.511 [2024-11-26 19:24:49.594164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8100, cid 0, qid 0 00:23:26.511 [2024-11-26 19:24:49.594229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.511 [2024-11-26 19:24:49.594234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.511 [2024-11-26 19:24:49.594237] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.594240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8100) on tqpair=0x2356690 00:23:26.511 [2024-11-26 19:24:49.594244] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:26.511 [2024-11-26 19:24:49.594248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:26.511 [2024-11-26 19:24:49.594255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:26.511 [2024-11-26 19:24:49.594362] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:26.511 [2024-11-26 19:24:49.594366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:26.511 [2024-11-26 19:24:49.594372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.594375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.594378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2356690) 00:23:26.511 [2024-11-26 19:24:49.594384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:26.511 [2024-11-26 19:24:49.594393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8100, cid 0, qid 0 00:23:26.511 [2024-11-26 19:24:49.594454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.511 [2024-11-26 19:24:49.594459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.511 [2024-11-26 19:24:49.594462] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.594465] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8100) on tqpair=0x2356690 00:23:26.511 [2024-11-26 19:24:49.594469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:26.511 [2024-11-26 19:24:49.594477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.594481] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.594484] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2356690) 00:23:26.511 [2024-11-26 19:24:49.594493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.511 [2024-11-26 19:24:49.594502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8100, cid 0, qid 0 00:23:26.511 [2024-11-26 19:24:49.594577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.511 [2024-11-26 19:24:49.594583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.511 [2024-11-26 19:24:49.594586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.594589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8100) on tqpair=0x2356690 00:23:26.511 [2024-11-26 19:24:49.594593] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:26.511 [2024-11-26 19:24:49.594597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:26.511 [2024-11-26 19:24:49.594604] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:26.511 [2024-11-26 19:24:49.594610] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:26.511 [2024-11-26 19:24:49.594618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.594621] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2356690) 00:23:26.511 [2024-11-26 19:24:49.594627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.511 [2024-11-26 19:24:49.594636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8100, cid 0, qid 0 00:23:26.511 [2024-11-26 19:24:49.594731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.511 [2024-11-26 19:24:49.594737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.511 [2024-11-26 19:24:49.594740] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
00:23:26.511 [2024-11-26 19:24:49.594743] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2356690): datao=0, datal=4096, cccid=0 00:23:26.511 [2024-11-26 19:24:49.594747] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b8100) on tqpair(0x2356690): expected_datao=0, payload_size=4096 00:23:26.511 [2024-11-26 19:24:49.594751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.594757] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.594761] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.594781] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.511 [2024-11-26 19:24:49.594786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.511 [2024-11-26 19:24:49.594789] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.511 [2024-11-26 19:24:49.594792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8100) on tqpair=0x2356690 00:23:26.511 [2024-11-26 19:24:49.594798] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:26.511 [2024-11-26 19:24:49.594803] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:26.511 [2024-11-26 19:24:49.594807] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:26.512 [2024-11-26 19:24:49.594810] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:26.512 [2024-11-26 19:24:49.594814] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:26.512 [2024-11-26 19:24:49.594818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:26.512 [2024-11-26 19:24:49.594826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:26.512 [2024-11-26 19:24:49.594834] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.594837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.594840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2356690) 00:23:26.512 [2024-11-26 19:24:49.594846] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:26.512 [2024-11-26 19:24:49.594856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8100, cid 0, qid 0 00:23:26.512 [2024-11-26 19:24:49.594926] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.512 [2024-11-26 19:24:49.594931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.512 [2024-11-26 19:24:49.594933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.594937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8100) on tqpair=0x2356690 00:23:26.512 [2024-11-26 19:24:49.594942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.594945] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.594948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2356690) 00:23:26.512 [2024-11-26 19:24:49.594953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.512 [2024-11-26 19:24:49.594958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.594962] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.594964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2356690) 00:23:26.512 [2024-11-26 19:24:49.594969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.512 [2024-11-26 19:24:49.594974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.594977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.594980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2356690) 00:23:26.512 [2024-11-26 19:24:49.594985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.512 [2024-11-26 19:24:49.594990] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.594993] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.594996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2356690) 00:23:26.512 [2024-11-26 19:24:49.595001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.512 [2024-11-26 19:24:49.595005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:26.512 [2024-11-26 19:24:49.595015] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:26.512 [2024-11-26 19:24:49.595020] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.595023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2356690) 00:23:26.512 [2024-11-26 19:24:49.595029] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.512 [2024-11-26 19:24:49.595040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8100, cid 0, qid 0 00:23:26.512 [2024-11-26 19:24:49.595044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8280, cid 1, qid 0 00:23:26.512 [2024-11-26 19:24:49.595048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8400, cid 2, qid 0 00:23:26.512 [2024-11-26 19:24:49.595054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8580, cid 3, qid 0 00:23:26.512 [2024-11-26 19:24:49.595058] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8700, cid 4, qid 0 00:23:26.512 [2024-11-26 19:24:49.595153] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.512 [2024-11-26 
19:24:49.595159] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.512 [2024-11-26 19:24:49.595162] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.595165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8700) on tqpair=0x2356690 00:23:26.512 [2024-11-26 19:24:49.595169] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:26.512 [2024-11-26 19:24:49.595173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:26.512 [2024-11-26 19:24:49.595182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:26.512 [2024-11-26 19:24:49.595188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:26.512 [2024-11-26 19:24:49.595193] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.595197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.595200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2356690) 00:23:26.512 [2024-11-26 19:24:49.595205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:26.512 [2024-11-26 19:24:49.595215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8700, cid 4, qid 0 00:23:26.512 [2024-11-26 19:24:49.595276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.512 [2024-11-26 19:24:49.595281] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.512 [2024-11-26 19:24:49.595284] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.595288] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8700) on tqpair=0x2356690 00:23:26.512 [2024-11-26 19:24:49.595339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:26.512 [2024-11-26 19:24:49.595348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:26.512 [2024-11-26 19:24:49.595355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.595358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2356690) 00:23:26.512 [2024-11-26 19:24:49.595363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.512 [2024-11-26 19:24:49.595372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8700, cid 4, qid 0 00:23:26.512 [2024-11-26 19:24:49.595447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.512 [2024-11-26 19:24:49.595452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.512 [2024-11-26 19:24:49.595455] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.595459] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2356690): datao=0, datal=4096, cccid=4 00:23:26.512 [2024-11-26 19:24:49.595462] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b8700) on tqpair(0x2356690): expected_datao=0, payload_size=4096 00:23:26.512 [2024-11-26 19:24:49.595466] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.595472] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.595475] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.595491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.512 [2024-11-26 19:24:49.595497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.512 [2024-11-26 19:24:49.595499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.595503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8700) on tqpair=0x2356690 00:23:26.512 [2024-11-26 19:24:49.595512] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:26.512 [2024-11-26 19:24:49.595523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:26.512 [2024-11-26 19:24:49.595532] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:26.512 [2024-11-26 19:24:49.595538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.512 [2024-11-26 19:24:49.595541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2356690) 00:23:26.512 [2024-11-26 19:24:49.595546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.512 [2024-11-26 19:24:49.595556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8700, cid 4, qid 0 00:23:26.513 [2024-11-26 19:24:49.595653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.513 [2024-11-26 19:24:49.595658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.513 [2024-11-26 19:24:49.595661] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.595664] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2356690): datao=0, datal=4096, cccid=4 00:23:26.513 [2024-11-26 19:24:49.595668] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b8700) on tqpair(0x2356690): expected_datao=0, payload_size=4096 00:23:26.513 [2024-11-26 19:24:49.595677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.595683] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.595686] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.595700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.513 [2024-11-26 19:24:49.595705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.513 [2024-11-26 19:24:49.595708] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.595712] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x23b8700) on tqpair=0x2356690 00:23:26.513 [2024-11-26 19:24:49.595720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:26.513 [2024-11-26 19:24:49.595729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:26.513 [2024-11-26 19:24:49.595735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.595738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2356690) 00:23:26.513 [2024-11-26 19:24:49.595743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.513 [2024-11-26 19:24:49.595754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8700, cid 4, qid 0 00:23:26.513 [2024-11-26 19:24:49.595825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.513 [2024-11-26 19:24:49.595831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.513 [2024-11-26 19:24:49.595834] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.595837] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2356690): datao=0, datal=4096, cccid=4 00:23:26.513 [2024-11-26 19:24:49.595841] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b8700) on tqpair(0x2356690): expected_datao=0, payload_size=4096 00:23:26.513 [2024-11-26 19:24:49.595847] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.595853] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.595856] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.595870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.513 [2024-11-26 19:24:49.595875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.513 [2024-11-26 19:24:49.595878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.595881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8700) on tqpair=0x2356690 00:23:26.513 [2024-11-26 19:24:49.595890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:26.513 [2024-11-26 19:24:49.595897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:26.513 [2024-11-26 19:24:49.595904] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:26.513 [2024-11-26 19:24:49.595909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:26.513 [2024-11-26 19:24:49.595913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:26.513 [2024-11-26 19:24:49.595918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 
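Editor's note: at this point the trace shows bring-up completing: namespace 1 is found and identified, supported log pages and features are recorded, Set Features - Host ID is skipped for NVMe-oF, and the controller state machine moves through "transport ready" to "ready". As a hedged example that is not part of this job, the same controller and namespace data can be read back from an ordinary Linux initiator with nvme-cli; the address, port and NQN are the ones in the trace, while the /dev/nvme0 and /dev/nvme0n1 device names are assumptions:
    # Example only: inspect the subsystem from a Linux host with nvme-cli.
    nvme discover   -t tcp -a 10.0.0.2 -s 4420
    nvme connect    -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl    /dev/nvme0       # compare with the Controller Capabilities/Features report below
    nvme id-ns      /dev/nvme0n1     # compare with the Active Namespaces report below
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1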
00:23:26.513 [2024-11-26 19:24:49.595923] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:26.513 [2024-11-26 19:24:49.595927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:26.513 [2024-11-26 19:24:49.595931] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:26.513 [2024-11-26 19:24:49.595943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.595947] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2356690) 00:23:26.513 [2024-11-26 19:24:49.595952] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.513 [2024-11-26 19:24:49.595958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.595961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.595964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2356690) 00:23:26.513 [2024-11-26 19:24:49.595969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.513 [2024-11-26 19:24:49.595980] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8700, cid 4, qid 0 00:23:26.513 [2024-11-26 19:24:49.595985] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8880, cid 5, qid 0 00:23:26.513 [2024-11-26 19:24:49.596067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.513 [2024-11-26 19:24:49.596072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.513 [2024-11-26 19:24:49.596075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.596078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8700) on tqpair=0x2356690 00:23:26.513 [2024-11-26 19:24:49.596084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.513 [2024-11-26 19:24:49.596088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.513 [2024-11-26 19:24:49.596091] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.596095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8880) on tqpair=0x2356690 00:23:26.513 [2024-11-26 19:24:49.596105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.596108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2356690) 00:23:26.513 [2024-11-26 19:24:49.596113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.513 [2024-11-26 19:24:49.596123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8880, cid 5, qid 0 00:23:26.513 [2024-11-26 19:24:49.596190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.513 [2024-11-26 19:24:49.596195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.513 [2024-11-26 19:24:49.596198] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.513 
[2024-11-26 19:24:49.596201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8880) on tqpair=0x2356690 00:23:26.513 [2024-11-26 19:24:49.596209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.596212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2356690) 00:23:26.513 [2024-11-26 19:24:49.596217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.513 [2024-11-26 19:24:49.596226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8880, cid 5, qid 0 00:23:26.513 [2024-11-26 19:24:49.596294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.513 [2024-11-26 19:24:49.596300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.513 [2024-11-26 19:24:49.596302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.596306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8880) on tqpair=0x2356690 00:23:26.513 [2024-11-26 19:24:49.596314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.596318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2356690) 00:23:26.513 [2024-11-26 19:24:49.596323] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.513 [2024-11-26 19:24:49.596332] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8880, cid 5, qid 0 00:23:26.513 [2024-11-26 19:24:49.596396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.513 [2024-11-26 19:24:49.596401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.513 [2024-11-26 19:24:49.596404] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.596407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8880) on tqpair=0x2356690 00:23:26.513 [2024-11-26 19:24:49.596421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.596425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2356690) 00:23:26.513 [2024-11-26 19:24:49.596430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.513 [2024-11-26 19:24:49.596436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.596439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2356690) 00:23:26.513 [2024-11-26 19:24:49.596445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.513 [2024-11-26 19:24:49.596450] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.596454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2356690) 00:23:26.513 [2024-11-26 19:24:49.596459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:26.513 [2024-11-26 19:24:49.596466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.513 [2024-11-26 19:24:49.596469] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2356690) 00:23:26.513 [2024-11-26 19:24:49.596475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.513 [2024-11-26 19:24:49.596485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8880, cid 5, qid 0 00:23:26.513 [2024-11-26 19:24:49.596489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8700, cid 4, qid 0 00:23:26.513 [2024-11-26 19:24:49.596493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8a00, cid 6, qid 0 00:23:26.514 [2024-11-26 19:24:49.596497] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8b80, cid 7, qid 0 00:23:26.514 [2024-11-26 19:24:49.596630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.514 [2024-11-26 19:24:49.596636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.514 [2024-11-26 19:24:49.596639] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596642] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2356690): datao=0, datal=8192, cccid=5 00:23:26.514 [2024-11-26 19:24:49.596646] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b8880) on tqpair(0x2356690): expected_datao=0, payload_size=8192 00:23:26.514 [2024-11-26 19:24:49.596649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596663] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596667] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.514 [2024-11-26 19:24:49.596681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.514 [2024-11-26 19:24:49.596684] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596687] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2356690): datao=0, datal=512, cccid=4 00:23:26.514 [2024-11-26 19:24:49.596691] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b8700) on tqpair(0x2356690): expected_datao=0, payload_size=512 00:23:26.514 [2024-11-26 19:24:49.596694] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596700] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596703] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.514 [2024-11-26 19:24:49.596712] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.514 [2024-11-26 19:24:49.596715] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596718] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2356690): datao=0, datal=512, cccid=6 00:23:26.514 [2024-11-26 19:24:49.596722] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b8a00) on 
tqpair(0x2356690): expected_datao=0, payload_size=512 00:23:26.514 [2024-11-26 19:24:49.596725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596730] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596733] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.514 [2024-11-26 19:24:49.596743] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.514 [2024-11-26 19:24:49.596745] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596748] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2356690): datao=0, datal=4096, cccid=7 00:23:26.514 [2024-11-26 19:24:49.596752] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b8b80) on tqpair(0x2356690): expected_datao=0, payload_size=4096 00:23:26.514 [2024-11-26 19:24:49.596757] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596763] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596766] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.514 [2024-11-26 19:24:49.596778] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.514 [2024-11-26 19:24:49.596780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8880) on tqpair=0x2356690 00:23:26.514 [2024-11-26 19:24:49.596793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.514 [2024-11-26 19:24:49.596798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.514 [2024-11-26 19:24:49.596801] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8700) on tqpair=0x2356690 00:23:26.514 [2024-11-26 19:24:49.596812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.514 [2024-11-26 19:24:49.596817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.514 [2024-11-26 19:24:49.596820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8a00) on tqpair=0x2356690 00:23:26.514 [2024-11-26 19:24:49.596829] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.514 [2024-11-26 19:24:49.596833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.514 [2024-11-26 19:24:49.596836] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.514 [2024-11-26 19:24:49.596839] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8b80) on tqpair=0x2356690 00:23:26.514 ===================================================== 00:23:26.514 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:26.514 ===================================================== 00:23:26.514 Controller Capabilities/Features 00:23:26.514 ================================ 00:23:26.514 Vendor ID: 8086 00:23:26.514 Subsystem Vendor ID: 8086 
00:23:26.514 Serial Number: SPDK00000000000001 00:23:26.514 Model Number: SPDK bdev Controller 00:23:26.514 Firmware Version: 25.01 00:23:26.514 Recommended Arb Burst: 6 00:23:26.514 IEEE OUI Identifier: e4 d2 5c 00:23:26.514 Multi-path I/O 00:23:26.514 May have multiple subsystem ports: Yes 00:23:26.514 May have multiple controllers: Yes 00:23:26.514 Associated with SR-IOV VF: No 00:23:26.514 Max Data Transfer Size: 131072 00:23:26.514 Max Number of Namespaces: 32 00:23:26.514 Max Number of I/O Queues: 127 00:23:26.514 NVMe Specification Version (VS): 1.3 00:23:26.514 NVMe Specification Version (Identify): 1.3 00:23:26.514 Maximum Queue Entries: 128 00:23:26.514 Contiguous Queues Required: Yes 00:23:26.514 Arbitration Mechanisms Supported 00:23:26.514 Weighted Round Robin: Not Supported 00:23:26.514 Vendor Specific: Not Supported 00:23:26.514 Reset Timeout: 15000 ms 00:23:26.514 Doorbell Stride: 4 bytes 00:23:26.514 NVM Subsystem Reset: Not Supported 00:23:26.514 Command Sets Supported 00:23:26.514 NVM Command Set: Supported 00:23:26.514 Boot Partition: Not Supported 00:23:26.514 Memory Page Size Minimum: 4096 bytes 00:23:26.514 Memory Page Size Maximum: 4096 bytes 00:23:26.514 Persistent Memory Region: Not Supported 00:23:26.514 Optional Asynchronous Events Supported 00:23:26.514 Namespace Attribute Notices: Supported 00:23:26.514 Firmware Activation Notices: Not Supported 00:23:26.514 ANA Change Notices: Not Supported 00:23:26.514 PLE Aggregate Log Change Notices: Not Supported 00:23:26.514 LBA Status Info Alert Notices: Not Supported 00:23:26.514 EGE Aggregate Log Change Notices: Not Supported 00:23:26.514 Normal NVM Subsystem Shutdown event: Not Supported 00:23:26.514 Zone Descriptor Change Notices: Not Supported 00:23:26.514 Discovery Log Change Notices: Not Supported 00:23:26.514 Controller Attributes 00:23:26.514 128-bit Host Identifier: Supported 00:23:26.514 Non-Operational Permissive Mode: Not Supported 00:23:26.514 NVM Sets: Not Supported 00:23:26.514 Read Recovery Levels: Not Supported 00:23:26.514 Endurance Groups: Not Supported 00:23:26.514 Predictable Latency Mode: Not Supported 00:23:26.514 Traffic Based Keep ALive: Not Supported 00:23:26.514 Namespace Granularity: Not Supported 00:23:26.514 SQ Associations: Not Supported 00:23:26.514 UUID List: Not Supported 00:23:26.514 Multi-Domain Subsystem: Not Supported 00:23:26.514 Fixed Capacity Management: Not Supported 00:23:26.514 Variable Capacity Management: Not Supported 00:23:26.514 Delete Endurance Group: Not Supported 00:23:26.514 Delete NVM Set: Not Supported 00:23:26.514 Extended LBA Formats Supported: Not Supported 00:23:26.514 Flexible Data Placement Supported: Not Supported 00:23:26.514 00:23:26.514 Controller Memory Buffer Support 00:23:26.514 ================================ 00:23:26.514 Supported: No 00:23:26.514 00:23:26.514 Persistent Memory Region Support 00:23:26.514 ================================ 00:23:26.514 Supported: No 00:23:26.514 00:23:26.514 Admin Command Set Attributes 00:23:26.514 ============================ 00:23:26.514 Security Send/Receive: Not Supported 00:23:26.514 Format NVM: Not Supported 00:23:26.514 Firmware Activate/Download: Not Supported 00:23:26.514 Namespace Management: Not Supported 00:23:26.514 Device Self-Test: Not Supported 00:23:26.514 Directives: Not Supported 00:23:26.515 NVMe-MI: Not Supported 00:23:26.515 Virtualization Management: Not Supported 00:23:26.515 Doorbell Buffer Config: Not Supported 00:23:26.515 Get LBA Status Capability: Not Supported 00:23:26.515 Command & 
Feature Lockdown Capability: Not Supported 00:23:26.515 Abort Command Limit: 4 00:23:26.515 Async Event Request Limit: 4 00:23:26.515 Number of Firmware Slots: N/A 00:23:26.515 Firmware Slot 1 Read-Only: N/A 00:23:26.515 Firmware Activation Without Reset: N/A 00:23:26.515 Multiple Update Detection Support: N/A 00:23:26.515 Firmware Update Granularity: No Information Provided 00:23:26.515 Per-Namespace SMART Log: No 00:23:26.515 Asymmetric Namespace Access Log Page: Not Supported 00:23:26.515 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:26.515 Command Effects Log Page: Supported 00:23:26.515 Get Log Page Extended Data: Supported 00:23:26.515 Telemetry Log Pages: Not Supported 00:23:26.515 Persistent Event Log Pages: Not Supported 00:23:26.515 Supported Log Pages Log Page: May Support 00:23:26.515 Commands Supported & Effects Log Page: Not Supported 00:23:26.515 Feature Identifiers & Effects Log Page:May Support 00:23:26.515 NVMe-MI Commands & Effects Log Page: May Support 00:23:26.515 Data Area 4 for Telemetry Log: Not Supported 00:23:26.515 Error Log Page Entries Supported: 128 00:23:26.515 Keep Alive: Supported 00:23:26.515 Keep Alive Granularity: 10000 ms 00:23:26.515 00:23:26.515 NVM Command Set Attributes 00:23:26.515 ========================== 00:23:26.515 Submission Queue Entry Size 00:23:26.515 Max: 64 00:23:26.515 Min: 64 00:23:26.515 Completion Queue Entry Size 00:23:26.515 Max: 16 00:23:26.515 Min: 16 00:23:26.515 Number of Namespaces: 32 00:23:26.515 Compare Command: Supported 00:23:26.515 Write Uncorrectable Command: Not Supported 00:23:26.515 Dataset Management Command: Supported 00:23:26.515 Write Zeroes Command: Supported 00:23:26.515 Set Features Save Field: Not Supported 00:23:26.515 Reservations: Supported 00:23:26.515 Timestamp: Not Supported 00:23:26.515 Copy: Supported 00:23:26.515 Volatile Write Cache: Present 00:23:26.515 Atomic Write Unit (Normal): 1 00:23:26.515 Atomic Write Unit (PFail): 1 00:23:26.515 Atomic Compare & Write Unit: 1 00:23:26.515 Fused Compare & Write: Supported 00:23:26.515 Scatter-Gather List 00:23:26.515 SGL Command Set: Supported 00:23:26.515 SGL Keyed: Supported 00:23:26.515 SGL Bit Bucket Descriptor: Not Supported 00:23:26.515 SGL Metadata Pointer: Not Supported 00:23:26.515 Oversized SGL: Not Supported 00:23:26.515 SGL Metadata Address: Not Supported 00:23:26.515 SGL Offset: Supported 00:23:26.515 Transport SGL Data Block: Not Supported 00:23:26.515 Replay Protected Memory Block: Not Supported 00:23:26.515 00:23:26.515 Firmware Slot Information 00:23:26.515 ========================= 00:23:26.515 Active slot: 1 00:23:26.515 Slot 1 Firmware Revision: 25.01 00:23:26.515 00:23:26.515 00:23:26.515 Commands Supported and Effects 00:23:26.515 ============================== 00:23:26.515 Admin Commands 00:23:26.515 -------------- 00:23:26.515 Get Log Page (02h): Supported 00:23:26.515 Identify (06h): Supported 00:23:26.515 Abort (08h): Supported 00:23:26.515 Set Features (09h): Supported 00:23:26.515 Get Features (0Ah): Supported 00:23:26.515 Asynchronous Event Request (0Ch): Supported 00:23:26.515 Keep Alive (18h): Supported 00:23:26.515 I/O Commands 00:23:26.515 ------------ 00:23:26.515 Flush (00h): Supported LBA-Change 00:23:26.515 Write (01h): Supported LBA-Change 00:23:26.515 Read (02h): Supported 00:23:26.515 Compare (05h): Supported 00:23:26.515 Write Zeroes (08h): Supported LBA-Change 00:23:26.515 Dataset Management (09h): Supported LBA-Change 00:23:26.515 Copy (19h): Supported LBA-Change 00:23:26.515 00:23:26.515 Error Log 00:23:26.515 
========= 00:23:26.515 00:23:26.515 Arbitration 00:23:26.515 =========== 00:23:26.515 Arbitration Burst: 1 00:23:26.515 00:23:26.515 Power Management 00:23:26.515 ================ 00:23:26.515 Number of Power States: 1 00:23:26.515 Current Power State: Power State #0 00:23:26.515 Power State #0: 00:23:26.515 Max Power: 0.00 W 00:23:26.515 Non-Operational State: Operational 00:23:26.515 Entry Latency: Not Reported 00:23:26.515 Exit Latency: Not Reported 00:23:26.515 Relative Read Throughput: 0 00:23:26.515 Relative Read Latency: 0 00:23:26.515 Relative Write Throughput: 0 00:23:26.515 Relative Write Latency: 0 00:23:26.515 Idle Power: Not Reported 00:23:26.515 Active Power: Not Reported 00:23:26.515 Non-Operational Permissive Mode: Not Supported 00:23:26.515 00:23:26.515 Health Information 00:23:26.515 ================== 00:23:26.515 Critical Warnings: 00:23:26.515 Available Spare Space: OK 00:23:26.515 Temperature: OK 00:23:26.515 Device Reliability: OK 00:23:26.515 Read Only: No 00:23:26.515 Volatile Memory Backup: OK 00:23:26.515 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:26.516 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:26.516 Available Spare: 0% 00:23:26.516 Available Spare Threshold: 0% 00:23:26.516 Life Percentage Used:[2024-11-26 19:24:49.596917] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.596921] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2356690) 00:23:26.516 [2024-11-26 19:24:49.596927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.516 [2024-11-26 19:24:49.596938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8b80, cid 7, qid 0 00:23:26.516 [2024-11-26 19:24:49.597010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.516 [2024-11-26 19:24:49.597016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.516 [2024-11-26 19:24:49.597019] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.597022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8b80) on tqpair=0x2356690 00:23:26.516 [2024-11-26 19:24:49.597052] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:26.516 [2024-11-26 19:24:49.597061] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8100) on tqpair=0x2356690 00:23:26.516 [2024-11-26 19:24:49.597066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.516 [2024-11-26 19:24:49.597071] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8280) on tqpair=0x2356690 00:23:26.516 [2024-11-26 19:24:49.597075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.516 [2024-11-26 19:24:49.597079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8400) on tqpair=0x2356690 00:23:26.516 [2024-11-26 19:24:49.597083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.516 [2024-11-26 19:24:49.597087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8580) on tqpair=0x2356690 00:23:26.516 [2024-11-26 19:24:49.597091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.516 [2024-11-26 19:24:49.597099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.597102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.597105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2356690) 00:23:26.516 [2024-11-26 19:24:49.597111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.516 [2024-11-26 19:24:49.597122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8580, cid 3, qid 0 00:23:26.516 [2024-11-26 19:24:49.597184] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.516 [2024-11-26 19:24:49.597189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.516 [2024-11-26 19:24:49.597192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.597196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8580) on tqpair=0x2356690 00:23:26.516 [2024-11-26 19:24:49.597201] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.597204] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.597207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2356690) 00:23:26.516 [2024-11-26 19:24:49.597213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.516 [2024-11-26 19:24:49.597224] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8580, cid 3, qid 0 00:23:26.516 [2024-11-26 19:24:49.597305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.516 [2024-11-26 19:24:49.597310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.516 [2024-11-26 19:24:49.597313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.597316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8580) on tqpair=0x2356690 00:23:26.516 [2024-11-26 19:24:49.597320] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:26.516 [2024-11-26 19:24:49.597324] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:26.516 [2024-11-26 19:24:49.597332] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.597336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.597339] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2356690) 00:23:26.516 [2024-11-26 19:24:49.597344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.516 [2024-11-26 19:24:49.597354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8580, cid 3, qid 0 00:23:26.516 [2024-11-26 19:24:49.597417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.516 [2024-11-26 19:24:49.597422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.516 [2024-11-26 
19:24:49.597425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.597429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8580) on tqpair=0x2356690 00:23:26.516 [2024-11-26 19:24:49.597437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.597440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.597443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2356690) 00:23:26.516 [2024-11-26 19:24:49.597449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.516 [2024-11-26 19:24:49.597457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8580, cid 3, qid 0 00:23:26.516 [2024-11-26 19:24:49.597525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.516 [2024-11-26 19:24:49.597530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.516 [2024-11-26 19:24:49.597535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.597538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8580) on tqpair=0x2356690 00:23:26.516 [2024-11-26 19:24:49.597546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.597550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.597553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2356690) 00:23:26.516 [2024-11-26 19:24:49.597558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.516 [2024-11-26 19:24:49.597568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8580, cid 3, qid 0 00:23:26.516 [2024-11-26 19:24:49.597630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.516 [2024-11-26 19:24:49.597635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.516 [2024-11-26 19:24:49.597638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.597641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8580) on tqpair=0x2356690 00:23:26.516 [2024-11-26 19:24:49.597649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.597653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.597656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2356690) 00:23:26.516 [2024-11-26 19:24:49.597661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.516 [2024-11-26 19:24:49.601674] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8580, cid 3, qid 0 00:23:26.516 [2024-11-26 19:24:49.601685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.516 [2024-11-26 19:24:49.601690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.516 [2024-11-26 19:24:49.601693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.601697] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8580) on 
tqpair=0x2356690 00:23:26.516 [2024-11-26 19:24:49.601706] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.601710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.601713] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2356690) 00:23:26.516 [2024-11-26 19:24:49.601719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.516 [2024-11-26 19:24:49.601730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8580, cid 3, qid 0 00:23:26.516 [2024-11-26 19:24:49.601865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.516 [2024-11-26 19:24:49.601871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.516 [2024-11-26 19:24:49.601874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.516 [2024-11-26 19:24:49.601877] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b8580) on tqpair=0x2356690 00:23:26.516 [2024-11-26 19:24:49.601884] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:23:26.516 0% 00:23:26.516 Data Units Read: 0 00:23:26.516 Data Units Written: 0 00:23:26.516 Host Read Commands: 0 00:23:26.516 Host Write Commands: 0 00:23:26.516 Controller Busy Time: 0 minutes 00:23:26.516 Power Cycles: 0 00:23:26.516 Power On Hours: 0 hours 00:23:26.516 Unsafe Shutdowns: 0 00:23:26.516 Unrecoverable Media Errors: 0 00:23:26.516 Lifetime Error Log Entries: 0 00:23:26.516 Warning Temperature Time: 0 minutes 00:23:26.516 Critical Temperature Time: 0 minutes 00:23:26.516 00:23:26.516 Number of Queues 00:23:26.516 ================ 00:23:26.516 Number of I/O Submission Queues: 127 00:23:26.517 Number of I/O Completion Queues: 127 00:23:26.517 00:23:26.517 Active Namespaces 00:23:26.517 ================= 00:23:26.517 Namespace ID:1 00:23:26.517 Error Recovery Timeout: Unlimited 00:23:26.517 Command Set Identifier: NVM (00h) 00:23:26.517 Deallocate: Supported 00:23:26.517 Deallocated/Unwritten Error: Not Supported 00:23:26.517 Deallocated Read Value: Unknown 00:23:26.517 Deallocate in Write Zeroes: Not Supported 00:23:26.517 Deallocated Guard Field: 0xFFFF 00:23:26.517 Flush: Supported 00:23:26.517 Reservation: Supported 00:23:26.517 Namespace Sharing Capabilities: Multiple Controllers 00:23:26.517 Size (in LBAs): 131072 (0GiB) 00:23:26.517 Capacity (in LBAs): 131072 (0GiB) 00:23:26.517 Utilization (in LBAs): 131072 (0GiB) 00:23:26.517 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:26.517 EUI64: ABCDEF0123456789 00:23:26.517 UUID: faa0ea06-17d9-47b1-ae05-b20eb09cddd4 00:23:26.517 Thin Provisioning: Not Supported 00:23:26.517 Per-NS Atomic Units: Yes 00:23:26.517 Atomic Boundary Size (Normal): 0 00:23:26.517 Atomic Boundary Size (PFail): 0 00:23:26.517 Atomic Boundary Offset: 0 00:23:26.517 Maximum Single Source Range Length: 65535 00:23:26.517 Maximum Copy Length: 65535 00:23:26.517 Maximum Source Range Count: 1 00:23:26.517 NGUID/EUI64 Never Reused: No 00:23:26.517 Namespace Write Protected: No 00:23:26.517 Number of LBA Formats: 1 00:23:26.517 Current LBA Format: LBA Format #00 00:23:26.517 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:26.517 00:23:26.775 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:26.775 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:26.775 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.775 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.775 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.775 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:26.775 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:26.775 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:26.775 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:26.775 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:26.775 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:26.775 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:26.775 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:26.775 rmmod nvme_tcp 00:23:26.775 rmmod nvme_fabrics 00:23:26.775 rmmod nvme_keyring 00:23:26.775 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:26.775 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:26.775 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:26.775 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3827125 ']' 00:23:26.776 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3827125 00:23:26.776 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3827125 ']' 00:23:26.776 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3827125 00:23:26.776 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:26.776 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.776 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3827125 00:23:26.776 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:26.776 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:26.776 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3827125' 00:23:26.776 killing process with pid 3827125 00:23:26.776 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3827125 00:23:26.776 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3827125 00:23:27.035 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:27.035 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:27.035 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:27.035 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:27.035 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:27.035 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:27.035 19:24:49 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:27.035 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:27.035 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:27.035 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.035 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.035 19:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.940 19:24:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:28.940 00:23:28.940 real 0m9.917s 00:23:28.940 user 0m7.940s 00:23:28.940 sys 0m4.920s 00:23:28.940 19:24:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:28.940 19:24:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.940 ************************************ 00:23:28.940 END TEST nvmf_identify 00:23:28.940 ************************************ 00:23:28.940 19:24:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:28.940 19:24:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:28.940 19:24:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:28.940 19:24:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.199 ************************************ 00:23:29.199 START TEST nvmf_perf 00:23:29.199 ************************************ 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:29.199 * Looking for test storage... 
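Editor's note: the xtrace above is the tail of host/identify.sh: the subsystem is deleted over RPC, the target process (pid 3827125) is killed, nvmftestfini unloads the initiator modules (the rmmod output shows nvme_tcp, nvme_fabrics and nvme_keyring going away), the cvl_0_1 address is flushed, and the suite moves on to nvmf_perf. A hedged sketch of the equivalent manual teardown, assuming the default RPC socket and the module names shown in that rmmod output, would be:
    # Sketch only; the test drives this through rpc_cmd and nvmftestfini.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring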
00:23:29.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:29.199 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:29.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.200 --rc genhtml_branch_coverage=1 00:23:29.200 --rc genhtml_function_coverage=1 00:23:29.200 --rc genhtml_legend=1 00:23:29.200 --rc geninfo_all_blocks=1 00:23:29.200 --rc geninfo_unexecuted_blocks=1 00:23:29.200 00:23:29.200 ' 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:29.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.200 --rc genhtml_branch_coverage=1 00:23:29.200 --rc genhtml_function_coverage=1 00:23:29.200 --rc genhtml_legend=1 00:23:29.200 --rc geninfo_all_blocks=1 00:23:29.200 --rc geninfo_unexecuted_blocks=1 00:23:29.200 00:23:29.200 ' 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:29.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.200 --rc genhtml_branch_coverage=1 00:23:29.200 --rc genhtml_function_coverage=1 00:23:29.200 --rc genhtml_legend=1 00:23:29.200 --rc geninfo_all_blocks=1 00:23:29.200 --rc geninfo_unexecuted_blocks=1 00:23:29.200 00:23:29.200 ' 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:29.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.200 --rc genhtml_branch_coverage=1 00:23:29.200 --rc genhtml_function_coverage=1 00:23:29.200 --rc genhtml_legend=1 00:23:29.200 --rc geninfo_all_blocks=1 00:23:29.200 --rc geninfo_unexecuted_blocks=1 00:23:29.200 00:23:29.200 ' 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:29.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:29.200 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.201 19:24:52 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.201 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.201 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:29.201 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:29.201 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:29.201 19:24:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:35.772 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:35.772 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:35.772 Found net devices under 0000:86:00.0: cvl_0_0 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:35.772 19:24:57 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:35.772 Found net devices under 0000:86:00.1: cvl_0_1 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.772 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.773 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:35.773 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:35.773 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:35.773 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:35.773 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:35.773 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:35.773 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:35.773 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.773 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:35.773 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:35.773 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:35.773 19:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.773 19:24:58 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:35.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:23:35.773 00:23:35.773 --- 10.0.0.2 ping statistics --- 00:23:35.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.773 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:35.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:35.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:23:35.773 00:23:35.773 --- 10.0.0.1 ping statistics --- 00:23:35.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.773 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3830894 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3830894 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3830894 ']' 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:35.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:35.773 [2024-11-26 19:24:58.306466] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:23:35.773 [2024-11-26 19:24:58.306515] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.773 [2024-11-26 19:24:58.383105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:35.773 [2024-11-26 19:24:58.425759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.773 [2024-11-26 19:24:58.425793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.773 [2024-11-26 19:24:58.425800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.773 [2024-11-26 19:24:58.425806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.773 [2024-11-26 19:24:58.425811] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.773 [2024-11-26 19:24:58.427185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.773 [2024-11-26 19:24:58.427296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.773 [2024-11-26 19:24:58.427402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.773 [2024-11-26 19:24:58.427402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:35.773 19:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:39.057 19:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:39.058 19:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:39.058 19:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:39.058 19:25:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:39.058 19:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
00:23:39.058 19:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:39.058 19:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:39.058 19:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:39.058 19:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:39.316 [2024-11-26 19:25:02.206988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.316 19:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:39.574 19:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:39.574 19:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.574 19:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:39.574 19:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:39.832 19:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:40.090 [2024-11-26 19:25:02.996490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.090 19:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:40.348 19:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:40.348 19:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:40.348 19:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:40.348 19:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:41.722 Initializing NVMe Controllers 00:23:41.722 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:41.722 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:41.722 Initialization complete. Launching workers. 
00:23:41.722 ======================================================== 00:23:41.722 Latency(us) 00:23:41.722 Device Information : IOPS MiB/s Average min max 00:23:41.722 PCIE (0000:5e:00.0) NSID 1 from core 0: 99324.55 387.99 321.77 29.94 7195.00 00:23:41.722 ======================================================== 00:23:41.722 Total : 99324.55 387.99 321.77 29.94 7195.00 00:23:41.722 00:23:41.722 19:25:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:43.097 Initializing NVMe Controllers 00:23:43.097 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:43.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:43.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:43.097 Initialization complete. Launching workers. 00:23:43.097 ======================================================== 00:23:43.097 Latency(us) 00:23:43.097 Device Information : IOPS MiB/s Average min max 00:23:43.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 254.10 0.99 3963.50 113.31 44802.10 00:23:43.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.78 0.24 16945.43 6982.26 47885.23 00:23:43.097 ======================================================== 00:23:43.097 Total : 315.88 1.23 6502.55 113.31 47885.23 00:23:43.097 00:23:43.097 19:25:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:44.032 Initializing NVMe Controllers 00:23:44.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:44.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:44.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:44.032 Initialization complete. Launching workers. 00:23:44.032 ======================================================== 00:23:44.032 Latency(us) 00:23:44.032 Device Information : IOPS MiB/s Average min max 00:23:44.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11233.99 43.88 2848.86 516.38 6792.65 00:23:44.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3931.00 15.36 8174.26 5526.58 15961.00 00:23:44.032 ======================================================== 00:23:44.032 Total : 15164.98 59.24 4229.29 516.38 15961.00 00:23:44.032 00:23:44.032 19:25:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:44.032 19:25:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:44.032 19:25:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:47.320 Initializing NVMe Controllers 00:23:47.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:47.320 Controller IO queue size 128, less than required. 00:23:47.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:47.320 Controller IO queue size 128, less than required. 00:23:47.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:47.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:47.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:47.320 Initialization complete. Launching workers. 00:23:47.320 ======================================================== 00:23:47.320 Latency(us) 00:23:47.320 Device Information : IOPS MiB/s Average min max 00:23:47.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1773.26 443.31 73356.92 49846.26 123273.22 00:23:47.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 608.07 152.02 213940.69 87703.78 326614.30 00:23:47.320 ======================================================== 00:23:47.320 Total : 2381.33 595.33 109255.04 49846.26 326614.30 00:23:47.320 00:23:47.320 19:25:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:47.320 No valid NVMe controllers or AIO or URING devices found 00:23:47.320 Initializing NVMe Controllers 00:23:47.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:47.320 Controller IO queue size 128, less than required. 00:23:47.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:47.320 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:47.320 Controller IO queue size 128, less than required. 00:23:47.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:47.320 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:47.321 WARNING: Some requested NVMe devices were skipped 00:23:47.321 19:25:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:49.853 Initializing NVMe Controllers 00:23:49.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:49.853 Controller IO queue size 128, less than required. 00:23:49.853 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:49.853 Controller IO queue size 128, less than required. 00:23:49.853 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:49.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:49.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:49.853 Initialization complete. Launching workers. 
00:23:49.853 00:23:49.853 ==================== 00:23:49.853 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:49.853 TCP transport: 00:23:49.853 polls: 11188 00:23:49.853 idle_polls: 7722 00:23:49.853 sock_completions: 3466 00:23:49.853 nvme_completions: 6253 00:23:49.853 submitted_requests: 9382 00:23:49.853 queued_requests: 1 00:23:49.853 00:23:49.853 ==================== 00:23:49.853 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:49.853 TCP transport: 00:23:49.853 polls: 11920 00:23:49.853 idle_polls: 8116 00:23:49.853 sock_completions: 3804 00:23:49.853 nvme_completions: 6477 00:23:49.853 submitted_requests: 9634 00:23:49.853 queued_requests: 1 00:23:49.853 ======================================================== 00:23:49.853 Latency(us) 00:23:49.854 Device Information : IOPS MiB/s Average min max 00:23:49.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1562.82 390.71 82653.74 53571.24 142536.81 00:23:49.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1618.82 404.70 80101.03 44629.23 121259.32 00:23:49.854 ======================================================== 00:23:49.854 Total : 3181.64 795.41 81354.92 44629.23 142536.81 00:23:49.854 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:49.854 rmmod nvme_tcp 00:23:49.854 rmmod nvme_fabrics 00:23:49.854 rmmod nvme_keyring 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3830894 ']' 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3830894 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3830894 ']' 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3830894 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3830894 00:23:49.854 19:25:12 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3830894' 00:23:49.854 killing process with pid 3830894 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3830894 00:23:49.854 19:25:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3830894 00:23:52.385 19:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:52.385 19:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:52.385 19:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:52.385 19:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:52.385 19:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:52.385 19:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:52.385 19:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:52.385 19:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:52.385 19:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:52.385 19:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.385 19:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.385 19:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:54.292 00:23:54.292 real 0m24.931s 00:23:54.292 user 1m5.525s 00:23:54.292 sys 0m8.429s 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:54.292 ************************************ 00:23:54.292 END TEST nvmf_perf 00:23:54.292 ************************************ 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.292 ************************************ 00:23:54.292 START TEST nvmf_fio_host 00:23:54.292 ************************************ 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:54.292 * Looking for test storage... 
00:23:54.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:54.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.292 --rc genhtml_branch_coverage=1 00:23:54.292 --rc genhtml_function_coverage=1 00:23:54.292 --rc genhtml_legend=1 00:23:54.292 --rc geninfo_all_blocks=1 00:23:54.292 --rc geninfo_unexecuted_blocks=1 00:23:54.292 00:23:54.292 ' 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:54.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.292 --rc genhtml_branch_coverage=1 00:23:54.292 --rc genhtml_function_coverage=1 00:23:54.292 --rc genhtml_legend=1 00:23:54.292 --rc geninfo_all_blocks=1 00:23:54.292 --rc geninfo_unexecuted_blocks=1 00:23:54.292 00:23:54.292 ' 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:54.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.292 --rc genhtml_branch_coverage=1 00:23:54.292 --rc genhtml_function_coverage=1 00:23:54.292 --rc genhtml_legend=1 00:23:54.292 --rc geninfo_all_blocks=1 00:23:54.292 --rc geninfo_unexecuted_blocks=1 00:23:54.292 00:23:54.292 ' 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:54.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.292 --rc genhtml_branch_coverage=1 00:23:54.292 --rc genhtml_function_coverage=1 00:23:54.292 --rc genhtml_legend=1 00:23:54.292 --rc geninfo_all_blocks=1 00:23:54.292 --rc geninfo_unexecuted_blocks=1 00:23:54.292 00:23:54.292 ' 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.292 19:25:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.292 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:54.293 
19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:54.293 19:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.864 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:00.865 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:00.865 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:00.865 Found net devices under 0000:86:00.0: cvl_0_0 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:00.865 Found net devices under 0000:86:00.1: cvl_0_1 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.865 19:25:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:00.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:24:00.865 00:24:00.865 --- 10.0.0.2 ping statistics --- 00:24:00.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.865 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:00.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:24:00.865 00:24:00.865 --- 10.0.0.1 ping statistics --- 00:24:00.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.865 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3837076 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3837076 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3837076 ']' 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.865 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.866 [2024-11-26 19:25:23.267435] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
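The trace above is nvmf_tcp_init building the point-to-point TCP topology used by every test in this run: the target port cvl_0_0 is moved into its own network namespace, both ends get addresses on 10.0.0.0/24, TCP port 4420 is opened on the initiator side, and connectivity is verified with ping before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of that setup, using the interface names and addresses printed in the log (an illustration, not the exact test/nvmf/common.sh implementation):

# Hedged sketch of the namespace-based TCP test topology seen in the trace.
TGT_IF=cvl_0_0            # target-side E810 port, moved into its own namespace
INI_IF=cvl_0_1            # initiator-side port, stays in the default namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the default NVMe/TCP port toward the initiator interface and check both directions.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

With the namespace in place, the target itself is started inside it (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF above), so all listener traffic has to cross the cvl_0_1 -> cvl_0_0 link.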
00:24:00.866 [2024-11-26 19:25:23.267477] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.866 [2024-11-26 19:25:23.346625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:00.866 [2024-11-26 19:25:23.389711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.866 [2024-11-26 19:25:23.389750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.866 [2024-11-26 19:25:23.389757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.866 [2024-11-26 19:25:23.389764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.866 [2024-11-26 19:25:23.389769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.866 [2024-11-26 19:25:23.391336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.866 [2024-11-26 19:25:23.391366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.866 [2024-11-26 19:25:23.391494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.866 [2024-11-26 19:25:23.391494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:00.866 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.866 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:00.866 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:00.866 [2024-11-26 19:25:23.662289] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.866 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:00.866 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:00.866 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.866 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:00.866 Malloc1 00:24:00.866 19:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:01.124 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:01.383 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:01.383 [2024-11-26 19:25:24.493930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:01.642 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:01.906 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:01.906 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:01.906 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:01.906 19:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:02.163 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:02.163 fio-3.35 00:24:02.163 Starting 1 thread 00:24:04.697 00:24:04.697 test: (groupid=0, jobs=1): 
err= 0: pid=3837604: Tue Nov 26 19:25:27 2024 00:24:04.697 read: IOPS=11.9k, BW=46.7MiB/s (48.9MB/s)(93.5MiB/2005msec) 00:24:04.697 slat (nsec): min=1546, max=245346, avg=1735.61, stdev=2217.50 00:24:04.697 clat (usec): min=3103, max=9902, avg=5921.87, stdev=452.38 00:24:04.697 lat (usec): min=3138, max=9903, avg=5923.60, stdev=452.29 00:24:04.697 clat percentiles (usec): 00:24:04.697 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5538], 00:24:04.697 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6063], 00:24:04.697 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:24:04.697 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 7898], 99.95th=[ 9241], 00:24:04.697 | 99.99th=[ 9896] 00:24:04.697 bw ( KiB/s): min=46752, max=48288, per=99.94%, avg=47748.00, stdev=709.60, samples=4 00:24:04.697 iops : min=11688, max=12072, avg=11937.00, stdev=177.40, samples=4 00:24:04.697 write: IOPS=11.9k, BW=46.4MiB/s (48.7MB/s)(93.1MiB/2005msec); 0 zone resets 00:24:04.697 slat (nsec): min=1578, max=227367, avg=1791.55, stdev=1631.73 00:24:04.697 clat (usec): min=2435, max=9104, avg=4779.23, stdev=365.37 00:24:04.697 lat (usec): min=2450, max=9106, avg=4781.02, stdev=365.37 00:24:04.697 clat percentiles (usec): 00:24:04.697 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:24:04.697 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4883], 00:24:04.697 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:24:04.697 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 6980], 99.95th=[ 8455], 00:24:04.697 | 99.99th=[ 9110] 00:24:04.697 bw ( KiB/s): min=47296, max=47976, per=100.00%, avg=47568.00, stdev=310.32, samples=4 00:24:04.697 iops : min=11824, max=11994, avg=11892.00, stdev=77.58, samples=4 00:24:04.697 lat (msec) : 4=0.75%, 10=99.25% 00:24:04.697 cpu : usr=72.11%, sys=26.85%, ctx=100, majf=0, minf=2 00:24:04.697 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:04.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:04.697 issued rwts: total=23948,23837,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.697 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:04.697 00:24:04.697 Run status group 0 (all jobs): 00:24:04.697 READ: bw=46.7MiB/s (48.9MB/s), 46.7MiB/s-46.7MiB/s (48.9MB/s-48.9MB/s), io=93.5MiB (98.1MB), run=2005-2005msec 00:24:04.697 WRITE: bw=46.4MiB/s (48.7MB/s), 46.4MiB/s-46.4MiB/s (48.7MB/s-48.7MB/s), io=93.1MiB (97.6MB), run=2005-2005msec 00:24:04.697 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:04.697 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:04.697 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:04.697 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:04.697 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local 
sanitizers 00:24:04.697 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:04.697 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:04.697 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:04.697 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:04.697 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:04.697 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:04.697 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:04.697 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:04.697 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:04.697 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:04.697 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:04.697 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:04.697 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:04.698 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:04.698 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:04.698 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:04.698 19:25:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:04.698 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:04.698 fio-3.35 00:24:04.698 Starting 1 thread 00:24:07.229 00:24:07.229 test: (groupid=0, jobs=1): err= 0: pid=3838172: Tue Nov 26 19:25:30 2024 00:24:07.229 read: IOPS=11.1k, BW=174MiB/s (183MB/s)(350MiB/2007msec) 00:24:07.229 slat (nsec): min=2497, max=87232, avg=2778.31, stdev=1251.07 00:24:07.229 clat (usec): min=1245, max=12631, avg=6587.66, stdev=1480.21 00:24:07.229 lat (usec): min=1248, max=12646, avg=6590.44, stdev=1480.33 00:24:07.229 clat percentiles (usec): 00:24:07.229 | 1.00th=[ 3654], 5.00th=[ 4293], 10.00th=[ 4686], 20.00th=[ 5276], 00:24:07.229 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6587], 60.00th=[ 6980], 00:24:07.229 | 70.00th=[ 7373], 80.00th=[ 7701], 90.00th=[ 8455], 95.00th=[ 9110], 00:24:07.229 | 99.00th=[10552], 99.50th=[11207], 99.90th=[12125], 99.95th=[12256], 00:24:07.229 | 99.99th=[12649] 00:24:07.229 bw ( KiB/s): min=86304, max=97280, per=50.70%, avg=90432.00, stdev=4828.15, samples=4 00:24:07.229 iops : min= 5394, max= 6080, avg=5652.00, stdev=301.76, samples=4 00:24:07.229 write: IOPS=6559, BW=102MiB/s (107MB/s)(184MiB/1799msec); 0 zone resets 00:24:07.229 slat 
(usec): min=28, max=392, avg=31.38, stdev= 7.50 00:24:07.229 clat (usec): min=3288, max=16129, avg=8561.68, stdev=1488.36 00:24:07.229 lat (usec): min=3318, max=16159, avg=8593.06, stdev=1489.74 00:24:07.229 clat percentiles (usec): 00:24:07.229 | 1.00th=[ 5604], 5.00th=[ 6456], 10.00th=[ 6849], 20.00th=[ 7308], 00:24:07.229 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8848], 00:24:07.229 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11207], 00:24:07.229 | 99.00th=[12518], 99.50th=[13042], 99.90th=[15401], 99.95th=[15795], 00:24:07.229 | 99.99th=[16057] 00:24:07.229 bw ( KiB/s): min=89920, max=101376, per=89.55%, avg=93984.00, stdev=5064.36, samples=4 00:24:07.229 iops : min= 5620, max= 6336, avg=5874.00, stdev=316.52, samples=4 00:24:07.229 lat (msec) : 2=0.05%, 4=1.62%, 10=91.64%, 20=6.68% 00:24:07.229 cpu : usr=84.70%, sys=14.61%, ctx=35, majf=0, minf=2 00:24:07.229 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:07.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:07.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:07.230 issued rwts: total=22374,11801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:07.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:07.230 00:24:07.230 Run status group 0 (all jobs): 00:24:07.230 READ: bw=174MiB/s (183MB/s), 174MiB/s-174MiB/s (183MB/s-183MB/s), io=350MiB (367MB), run=2007-2007msec 00:24:07.230 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=184MiB (193MB), run=1799-1799msec 00:24:07.230 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:07.488 rmmod nvme_tcp 00:24:07.488 rmmod nvme_fabrics 00:24:07.488 rmmod nvme_keyring 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3837076 ']' 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3837076 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3837076 ']' 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 3837076 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3837076 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3837076' 00:24:07.488 killing process with pid 3837076 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3837076 00:24:07.488 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3837076 00:24:07.747 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:07.747 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:07.747 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:07.747 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:07.747 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:07.747 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:07.747 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:07.747 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:07.747 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:07.747 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.747 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.747 19:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.280 19:25:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:10.280 00:24:10.280 real 0m15.691s 00:24:10.280 user 0m46.054s 00:24:10.280 sys 0m6.488s 00:24:10.280 19:25:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:10.280 19:25:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.280 ************************************ 00:24:10.280 END TEST nvmf_fio_host 00:24:10.280 ************************************ 00:24:10.280 19:25:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:10.280 19:25:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:10.280 19:25:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:10.280 19:25:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.280 ************************************ 00:24:10.280 START TEST nvmf_failover 00:24:10.280 ************************************ 00:24:10.280 19:25:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:10.280 * Looking for test storage... 00:24:10.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:10.280 19:25:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:10.280 19:25:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:24:10.280 19:25:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:10.280 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:10.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.281 --rc genhtml_branch_coverage=1 00:24:10.281 --rc genhtml_function_coverage=1 00:24:10.281 --rc genhtml_legend=1 00:24:10.281 --rc geninfo_all_blocks=1 00:24:10.281 --rc geninfo_unexecuted_blocks=1 00:24:10.281 00:24:10.281 ' 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:10.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.281 --rc genhtml_branch_coverage=1 00:24:10.281 --rc genhtml_function_coverage=1 00:24:10.281 --rc genhtml_legend=1 00:24:10.281 --rc geninfo_all_blocks=1 00:24:10.281 --rc geninfo_unexecuted_blocks=1 00:24:10.281 00:24:10.281 ' 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:10.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.281 --rc genhtml_branch_coverage=1 00:24:10.281 --rc genhtml_function_coverage=1 00:24:10.281 --rc genhtml_legend=1 00:24:10.281 --rc geninfo_all_blocks=1 00:24:10.281 --rc geninfo_unexecuted_blocks=1 00:24:10.281 00:24:10.281 ' 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:10.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.281 --rc genhtml_branch_coverage=1 00:24:10.281 --rc genhtml_function_coverage=1 00:24:10.281 --rc genhtml_legend=1 00:24:10.281 --rc geninfo_all_blocks=1 00:24:10.281 --rc geninfo_unexecuted_blocks=1 00:24:10.281 00:24:10.281 ' 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:10.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
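At this test boundary it is worth recapping how the fio_host run above provisioned its target: everything after nvmf_tgt came up went through rpc.py, and failover.sh has just defined the same rpc_py helper together with MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512, which feed the same kind of bdev_malloc_create call. A condensed sketch of that provisioning sequence, with the long workspace prefix shortened; the arguments are the ones visible in the trace:

RPC=./scripts/rpc.py       # the trace invokes it via the full /var/jenkins/workspace/... path

$RPC nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, flags as recorded above
$RPC bdev_malloc_create 64 512 -b Malloc1                      # 64 MB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

fio then drove the exported namespace through the SPDK NVMe fio plugin: LD_PRELOAD points at build/fio/spdk_nvme, example_config.fio selects ioengine=spdk, and the connection string is passed as the fio filename ('trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1').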
00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:10.281 19:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:16.843 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:16.843 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:16.843 Found net devices under 0000:86:00.0: cvl_0_0 00:24:16.843 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:16.844 Found net devices under 0000:86:00.1: cvl_0_1 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:16.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:16.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:24:16.844 00:24:16.844 --- 10.0.0.2 ping statistics --- 00:24:16.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.844 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:16.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:24:16.844 00:24:16.844 --- 10.0.0.1 ping statistics --- 00:24:16.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.844 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3842011 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3842011 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3842011 ']' 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:16.844 19:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:16.844 [2024-11-26 19:25:39.044600] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:24:16.844 [2024-11-26 19:25:39.044644] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.844 [2024-11-26 19:25:39.124659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:16.844 [2024-11-26 19:25:39.168480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:16.844 [2024-11-26 19:25:39.168517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.844 [2024-11-26 19:25:39.168525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.844 [2024-11-26 19:25:39.168531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.844 [2024-11-26 19:25:39.168537] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.844 [2024-11-26 19:25:39.169995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.844 [2024-11-26 19:25:39.170107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.844 [2024-11-26 19:25:39.170107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:16.844 19:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:16.844 19:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:16.844 19:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:16.844 19:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:16.844 19:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:16.844 19:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.844 19:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:16.844 [2024-11-26 19:25:39.478921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.844 19:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:16.844 Malloc0 00:24:16.844 19:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:16.844 19:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:17.102 19:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.361 [2024-11-26 19:25:40.284556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.361 19:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:17.619 [2024-11-26 19:25:40.497098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:17.619 19:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:17.619 [2024-11-26 19:25:40.701758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:24:17.619 19:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:17.619 19:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3842404 00:24:17.619 19:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:17.619 19:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3842404 /var/tmp/bdevperf.sock 00:24:17.619 19:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3842404 ']' 00:24:17.619 19:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.879 19:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.879 19:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:17.879 19:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.879 19:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:17.879 19:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.879 19:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:17.879 19:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:18.138 NVMe0n1 00:24:18.396 19:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:18.654 00:24:18.654 19:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:18.654 19:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3842426 00:24:18.654 19:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:19.590 19:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:19.849 [2024-11-26 19:25:42.858096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 
[2024-11-26 19:25:42.858187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 [2024-11-26 19:25:42.858313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d2d0 is same with the state(6) to be set 00:24:19.849 19:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:23.136 19:25:45 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:23.136 00:24:23.136 19:25:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:23.396 [2024-11-26 19:25:46.412903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4dfa0 is same with the state(6) to be set 00:24:23.396 [2024-11-26 19:25:46.412947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4dfa0 is same with the state(6) to be set 00:24:23.396 [2024-11-26 19:25:46.412955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4dfa0 is same with the state(6) to be set 00:24:23.396 [2024-11-26 19:25:46.412962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4dfa0 is same with the state(6) to be set 00:24:23.396 [2024-11-26 19:25:46.412968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4dfa0 is same with the state(6) to be set 00:24:23.396 [2024-11-26 19:25:46.412974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4dfa0 is same with the state(6) to be set 00:24:23.396 [2024-11-26 19:25:46.412980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4dfa0 is same with the state(6) to be set 00:24:23.396 19:25:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:26.683 19:25:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:26.683 [2024-11-26 19:25:49.623824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.683 19:25:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:27.619 19:25:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:27.922 [2024-11-26 19:25:50.836814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.922 [2024-11-26 19:25:50.836854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.922 [2024-11-26 19:25:50.836861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.922 [2024-11-26 19:25:50.836868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.922 [2024-11-26 19:25:50.836874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.922 [2024-11-26 19:25:50.836880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.922 [2024-11-26 19:25:50.836886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 
is same with the state(6) to be set 00:24:27.922 [2024-11-26 19:25:50.836892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.922 [2024-11-26 19:25:50.836898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.922 [2024-11-26 19:25:50.836903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.922 [2024-11-26 19:25:50.836909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.922 [2024-11-26 19:25:50.836925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.922 [2024-11-26 19:25:50.836932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.922 [2024-11-26 19:25:50.836938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.922 [2024-11-26 19:25:50.836944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.922 [2024-11-26 19:25:50.836950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.922 [2024-11-26 19:25:50.836956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.922 [2024-11-26 19:25:50.836962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.922 [2024-11-26 19:25:50.836968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.922 [2024-11-26 19:25:50.836973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.923 [2024-11-26 19:25:50.836979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.923 [2024-11-26 19:25:50.836985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.923 [2024-11-26 19:25:50.836990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.923 [2024-11-26 19:25:50.836996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.923 [2024-11-26 19:25:50.837002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.923 [2024-11-26 19:25:50.837008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.923 [2024-11-26 19:25:50.837013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.923 [2024-11-26 19:25:50.837019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.923 [2024-11-26 19:25:50.837024] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ece0 is same with the state(6) to be set 00:24:27.923 19:25:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3842426 00:24:34.498 { 00:24:34.498 "results": [ 00:24:34.498 { 00:24:34.498 "job": "NVMe0n1", 00:24:34.498 "core_mask": "0x1", 00:24:34.498 "workload": "verify", 00:24:34.498 "status": "finished", 00:24:34.498 "verify_range": { 00:24:34.498 "start": 0, 00:24:34.498 "length": 16384 00:24:34.498 }, 00:24:34.498 "queue_depth": 128, 00:24:34.498 "io_size": 4096, 00:24:34.498 "runtime": 15.011169, 00:24:34.498 "iops": 11284.064552201098, 00:24:34.498 "mibps": 44.07837715703554, 00:24:34.498 "io_failed": 10349, 00:24:34.498 "io_timeout": 0, 00:24:34.498 "avg_latency_us": 10668.497164306591, 00:24:34.498 "min_latency_us": 427.1542857142857, 00:24:34.499 "max_latency_us": 27962.02666666667 00:24:34.499 } 00:24:34.499 ], 00:24:34.499 "core_count": 1 00:24:34.499 } 00:24:34.499 19:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3842404 00:24:34.499 19:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3842404 ']' 00:24:34.499 19:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3842404 00:24:34.499 19:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:34.499 19:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:34.499 19:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3842404 00:24:34.499 19:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:34.499 19:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:34.499 19:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3842404' 00:24:34.499 killing process with pid 3842404 00:24:34.499 19:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3842404 00:24:34.499 19:25:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3842404 00:24:34.499 19:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:34.499 [2024-11-26 19:25:40.758183] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:24:34.499 [2024-11-26 19:25:40.758232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3842404 ] 00:24:34.499 [2024-11-26 19:25:40.832398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.499 [2024-11-26 19:25:40.873511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.499 Running I/O for 15 seconds... 
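A quick arithmetic check on the bdevperf summary just above (iops and io_size copied from the JSON block; 4096 bytes is the -o argument on the bdevperf command line). This is a reader-side sanity check only, not part of the test output:

# 1 MiB = 1048576 bytes; values taken from the "results" JSON above
awk 'BEGIN { iops = 11284.064552201098; io_size = 4096;
             printf "%.2f MiB/s\n", iops * io_size / 1048576 }'
# prints 44.08 MiB/s, matching the reported "mibps". The 10349 io_failed are
# presumably the I/Os that errored out while listeners on ports 4420/4421/4422
# were being removed and re-added; the run still finishes because the controller
# was attached with -x failover.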
00:24:34.499 11404.00 IOPS, 44.55 MiB/s [2024-11-26T18:25:57.613Z] [2024-11-26 19:25:42.859127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.499 [2024-11-26 19:25:42.859160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.499 [2024-11-26 19:25:42.859177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.499 [2024-11-26 19:25:42.859191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.499 [2024-11-26 19:25:42.859206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x779370 is same with the state(6) to be set 00:24:34.499 [2024-11-26 19:25:42.859266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.499 [2024-11-26 19:25:42.859275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:101400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.499 [2024-11-26 19:25:42.859294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:101408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.499 [2024-11-26 19:25:42.859309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.499 [2024-11-26 19:25:42.859325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.499 [2024-11-26 19:25:42.859340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.499 [2024-11-26 19:25:42.859354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.499 [2024-11-26 19:25:42.859373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.499 [2024-11-26 19:25:42.859388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.499 [2024-11-26 19:25:42.859402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.499 [2024-11-26 19:25:42.859417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.499 [2024-11-26 19:25:42.859431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.499 [2024-11-26 19:25:42.859447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.499 [2024-11-26 19:25:42.859462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.499 [2024-11-26 19:25:42.859476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.499 [2024-11-26 19:25:42.859490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.499 [2024-11-26 
19:25:42.859504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.499 [2024-11-26 19:25:42.859518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.499 [2024-11-26 19:25:42.859532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.499 [2024-11-26 19:25:42.859547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.499 [2024-11-26 19:25:42.859562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.499 [2024-11-26 19:25:42.859576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.499 [2024-11-26 19:25:42.859590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.499 [2024-11-26 19:25:42.859604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.499 [2024-11-26 19:25:42.859618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.499 [2024-11-26 19:25:42.859632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.499 [2024-11-26 19:25:42.859646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.499 [2024-11-26 19:25:42.859660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.499 [2024-11-26 19:25:42.859668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.859987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.859995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.860001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.860009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.860016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.860023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.860030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.860038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.860044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.860051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.860057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.860065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.860072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.860079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.860085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:34.500 [2024-11-26 19:25:42.860093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.860101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.860109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.860115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.860123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.860130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.860137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.860144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.860153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.860159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.860167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.860173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.860181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.860187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.860195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.860201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.860211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.860217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.860226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.860231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.500 [2024-11-26 19:25:42.860239] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.500 [2024-11-26 19:25:42.860245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:102152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:102168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:102176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860520] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:102216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:102232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:102248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 
lba:102280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:102288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:102312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:102320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:102328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:102352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.501 [2024-11-26 19:25:42.860806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102360 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:34.501 [2024-11-26 19:25:42.860813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.860822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.502 [2024-11-26 19:25:42.860828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.860836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:102376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.502 [2024-11-26 19:25:42.860842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.860850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:102384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.502 [2024-11-26 19:25:42.860856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.860863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.502 [2024-11-26 19:25:42.860870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.860877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.502 [2024-11-26 19:25:42.860884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.860892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:102408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.502 [2024-11-26 19:25:42.860898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.860906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:42.860912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.860920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:42.860926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.860934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:42.860940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.860948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 
19:25:42.860954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.860961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:42.860968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.860976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:42.860982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.860990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:42.860997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.861005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:42.861011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.861018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:42.861025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.861032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:42.861039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.861046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:42.861052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.861062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:42.861068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.861076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:101624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:42.861083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.861090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:42.861097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.861119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.502 [2024-11-26 19:25:42.861125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.502 [2024-11-26 19:25:42.861132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101640 len:8 PRP1 0x0 PRP2 0x0 00:24:34.502 [2024-11-26 19:25:42.861138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:42.861185] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:34.502 [2024-11-26 19:25:42.861195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:34.502 [2024-11-26 19:25:42.863962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:34.502 [2024-11-26 19:25:42.863989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x779370 (9): Bad file descriptor 00:24:34.502 [2024-11-26 19:25:42.892125] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:24:34.502 11338.50 IOPS, 44.29 MiB/s [2024-11-26T18:25:57.616Z] 11383.67 IOPS, 44.47 MiB/s [2024-11-26T18:25:57.616Z] 11440.00 IOPS, 44.69 MiB/s [2024-11-26T18:25:57.616Z] [2024-11-26 19:25:46.414143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.502 [2024-11-26 19:25:46.414178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:46.414199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.502 [2024-11-26 19:25:46.414207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:46.414216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.502 [2024-11-26 19:25:46.414223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:46.414232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.502 [2024-11-26 19:25:46.414239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:46.414247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.502 [2024-11-26 19:25:46.414254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:46.414263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:46.414269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:46.414278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:46.414284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:46.414292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:46.414299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:46.414308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:46.414314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:46.414322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:46.414329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:46.414338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:46.414345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:46.414353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.502 [2024-11-26 19:25:46.414360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:46.414369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.502 [2024-11-26 19:25:46.414375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:46.414383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.502 [2024-11-26 19:25:46.414390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:46.414399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.502 [2024-11-26 19:25:46.414406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.502 [2024-11-26 19:25:46.414414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 
19:25:46.414420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:34.503 [2024-11-26 19:25:46.414879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.503 [2024-11-26 19:25:46.414924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.503 [2024-11-26 19:25:46.414930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.414939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.414946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.414954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.414963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.414971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.414977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.414985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.414993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415029] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415175] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.504 [2024-11-26 19:25:46.415296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.504 [2024-11-26 19:25:46.415322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45264 len:8 PRP1 0x0 PRP2 0x0 00:24:34.504 [2024-11-26 19:25:46.415328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 
19:25:46.415363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.504 [2024-11-26 19:25:46.415374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.504 [2024-11-26 19:25:46.415388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.504 [2024-11-26 19:25:46.415402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.504 [2024-11-26 19:25:46.415416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x779370 is same with the state(6) to be set 00:24:34.504 [2024-11-26 19:25:46.415554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.504 [2024-11-26 19:25:46.415562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.504 [2024-11-26 19:25:46.415567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45272 len:8 PRP1 0x0 PRP2 0x0 00:24:34.504 [2024-11-26 19:25:46.415574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.504 [2024-11-26 19:25:46.415587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.504 [2024-11-26 19:25:46.415593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45280 len:8 PRP1 0x0 PRP2 0x0 00:24:34.504 [2024-11-26 19:25:46.415599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.504 [2024-11-26 19:25:46.415612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.504 [2024-11-26 19:25:46.415617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45288 len:8 PRP1 0x0 PRP2 0x0 00:24:34.504 [2024-11-26 19:25:46.415623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.504 [2024-11-26 19:25:46.415635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.504 
[2024-11-26 19:25:46.415640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45296 len:8 PRP1 0x0 PRP2 0x0 00:24:34.504 [2024-11-26 19:25:46.415647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.504 [2024-11-26 19:25:46.415653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.504 [2024-11-26 19:25:46.415658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.505 [2024-11-26 19:25:46.415664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45304 len:8 PRP1 0x0 PRP2 0x0 00:24:34.505 [2024-11-26 19:25:46.415676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.505 [2024-11-26 19:25:46.415683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.505 [2024-11-26 19:25:46.415688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.505 [2024-11-26 19:25:46.415695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45312 len:8 PRP1 0x0 PRP2 0x0 00:24:34.505 [2024-11-26 19:25:46.415701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.505 [2024-11-26 19:25:46.415708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.505 [2024-11-26 19:25:46.415713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.505 [2024-11-26 19:25:46.415719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45320 len:8 PRP1 0x0 PRP2 0x0 00:24:34.505 [2024-11-26 19:25:46.415725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.505 [2024-11-26 19:25:46.415732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.505 [2024-11-26 19:25:46.415737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.505 [2024-11-26 19:25:46.415742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45328 len:8 PRP1 0x0 PRP2 0x0 00:24:34.505 [2024-11-26 19:25:46.415748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.505 [2024-11-26 19:25:46.415757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.505 [2024-11-26 19:25:46.415763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.505 [2024-11-26 19:25:46.415768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45336 len:8 PRP1 0x0 PRP2 0x0 00:24:34.505 [2024-11-26 19:25:46.415775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.505 [2024-11-26 19:25:46.415781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.505 [2024-11-26 19:25:46.415786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.505 [2024-11-26 19:25:46.415792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45344 len:8 PRP1 0x0 PRP2 0x0 00:24:34.505 [2024-11-26 19:25:46.415799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.505 [2024-11-26 19:25:46.415805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.505 [2024-11-26 19:25:46.415810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.505 [2024-11-26 19:25:46.415816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45352 len:8 PRP1 0x0 PRP2 0x0 00:24:34.505 [2024-11-26 19:25:46.415823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.505 [2024-11-26 19:25:46.415829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.505 [2024-11-26 19:25:46.415834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.505 [2024-11-26 19:25:46.415840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45360 len:8 PRP1 0x0 PRP2 0x0 00:24:34.505 [2024-11-26 19:25:46.415846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.505 [2024-11-26 19:25:46.415853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.505 [2024-11-26 19:25:46.415858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.505 [2024-11-26 19:25:46.415863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45368 len:8 PRP1 0x0 PRP2 0x0 00:24:34.505 [2024-11-26 19:25:46.415869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.505 [2024-11-26 19:25:46.415877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.505 [2024-11-26 19:25:46.415883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.505 [2024-11-26 19:25:46.415888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45376 len:8 PRP1 0x0 PRP2 0x0 00:24:34.505 [2024-11-26 19:25:46.415895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.505 [2024-11-26 19:25:46.415901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.505 [2024-11-26 19:25:46.415906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.505 [2024-11-26 19:25:46.415911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45384 len:8 PRP1 0x0 PRP2 0x0 00:24:34.505 [2024-11-26 19:25:46.415918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.505 [2024-11-26 19:25:46.415925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.505 [2024-11-26 19:25:46.415930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.505 [2024-11-26 19:25:46.415936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:45392 len:8 PRP1 0x0 PRP2 0x0 00:24:34.505 [2024-11-26 19:25:46.415942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.505 [2024-11-26 19:25:46.415950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.505 [2024-11-26 19:25:46.415955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.505 [2024-11-26 19:25:46.415961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45400 len:8 PRP1 0x0 PRP2 0x0 00:24:34.505 [2024-11-26 19:25:46.415967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.505 [2024-11-26 19:25:46.415974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.505 [2024-11-26 19:25:46.415980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.505 [2024-11-26 19:25:46.415985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45408 len:8 PRP1 0x0 PRP2 0x0 00:24:34.505 [2024-11-26 19:25:46.415991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.505 [2024-11-26 19:25:46.415998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.505 [2024-11-26 19:25:46.416002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.505 [2024-11-26 19:25:46.416008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45416 len:8 PRP1 0x0 PRP2 0x0 00:24:34.505 [2024-11-26 19:25:46.416014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.505 [2024-11-26 19:25:46.416021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.505 [2024-11-26 19:25:46.416026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.505 [2024-11-26 19:25:46.416031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45424 len:8 PRP1 0x0 PRP2 0x0 00:24:34.505 [2024-11-26 19:25:46.416038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.505 [2024-11-26 19:25:46.416044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.505 [2024-11-26 19:25:46.416049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.505 [2024-11-26 19:25:46.416054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45432 len:8 PRP1 0x0 PRP2 0x0 00:24:34.505 [2024-11-26 19:25:46.416062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.505 [2024-11-26 19:25:46.416068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.505 [2024-11-26 19:25:46.416073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.505 [2024-11-26 19:25:46.416079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45440 len:8 PRP1 0x0 PRP2 0x0 
00:24:34.505 [2024-11-26 19:25:46.416086 - 19:25:46.436262] nvme_qpair.c: repeated *ERROR*/*NOTICE* output while qid:1 is torn down: for every outstanding READ/WRITE command (sqid:1 cid:0 nsid:1, lba 44480-45488, len:8, PRP1 0x0 PRP2 0x0) the driver prints "aborting queued i/o", "Command completed manually:", the command itself, and the completion "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0".
00:24:34.510 [2024-11-26 19:25:46.436311] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:34.510 [2024-11-26 19:25:46.436322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:24:34.510 [2024-11-26 19:25:46.436360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x779370 (9): Bad file descriptor
00:24:34.510 [2024-11-26 19:25:46.441002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:24:34.510 [2024-11-26 19:25:46.594418] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
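The sequence above is one complete path failover: queued I/O on qid:1 is aborted with SQ DELETION, bdev_nvme fails over from 10.0.0.2:4421 to 10.0.0.2:4422, the flush of the old TCP qpair fails on the dead socket, and the controller reset completes roughly 0.16 s after the failover notice. As a rough illustration only (this helper is not part of the SPDK test scripts, and it assumes the bracketed "[YYYY-MM-DD HH:MM:SS.ffffff]" timestamps seen in this log), the failover latency can be pulled out of such a log like this:

    # Hypothetical log-analysis sketch, not an SPDK tool: measure how long a
    # bdev_nvme failover took from the console output above.
    from datetime import datetime

    def failover_duration(log_lines):
        """Seconds from the 'Start failover' notice to 'Resetting controller successful'."""
        start = done = None
        for line in log_lines:
            if "bdev_nvme_failover_trid" in line or "Resetting controller successful" in line:
                # Timestamps look like: [2024-11-26 19:25:46.436311]
                stamp = line.split("[", 1)[1].split("]", 1)[0]
                ts = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S.%f")
                if "bdev_nvme_failover_trid" in line:
                    if start is None:
                        start = ts
                else:
                    done = ts
        return (done - start).total_seconds() if start and done else None

Applied to the excerpt above it would report about 0.158 s (19:25:46.436311 to 19:25:46.594418). Note also that len:8 in each aborted command is in 512-byte blocks under the usual LBA-format assumption, i.e. 4 KiB per I/O, which is consistent with the paired ~11000 IOPS / ~43 MiB/s figures bdevperf reports next.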
00:24:34.510 11008.60 IOPS, 43.00 MiB/s [2024-11-26T18:25:57.624Z] 11090.17 IOPS, 43.32 MiB/s [2024-11-26T18:25:57.624Z] 11127.86 IOPS, 43.47 MiB/s [2024-11-26T18:25:57.624Z] 11176.62 IOPS, 43.66 MiB/s [2024-11-26T18:25:57.624Z] 11228.56 IOPS, 43.86 MiB/s [2024-11-26T18:25:57.624Z] [2024-11-26 19:25:50.838591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:34.510 [2024-11-26 19:25:50.838918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.510 [2024-11-26 19:25:50.838926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.511 [2024-11-26 19:25:50.838933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.838940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.511 [2024-11-26 19:25:50.838947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.838954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.511 [2024-11-26 19:25:50.838962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.838969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.511 [2024-11-26 19:25:50.838976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.838985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.511 [2024-11-26 19:25:50.838991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.838999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.511 [2024-11-26 19:25:50.839005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.511 [2024-11-26 19:25:50.839019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.511 [2024-11-26 19:25:50.839034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.511 [2024-11-26 19:25:50.839048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.511 [2024-11-26 
19:25:50.839064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.511 [2024-11-26 19:25:50.839078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:34.511 [2024-11-26 19:25:50.839497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.511 [2024-11-26 19:25:50.839503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 
19:25:50.839638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:96 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.839987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.839994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.512 [2024-11-26 19:25:50.840000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.512 [2024-11-26 19:25:50.840027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.512 [2024-11-26 19:25:50.840036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99640 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99648 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99656 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99664 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99672 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99680 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99688 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99696 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99704 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99712 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99720 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99728 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99736 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99744 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99752 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 
19:25:50.840371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99760 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99768 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99776 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99784 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99792 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99800 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840506] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99808 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99816 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.513 [2024-11-26 19:25:50.840555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.513 [2024-11-26 19:25:50.840560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99824 len:8 PRP1 0x0 PRP2 0x0 00:24:34.513 [2024-11-26 19:25:50.840566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.513 [2024-11-26 19:25:50.840572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.514 [2024-11-26 19:25:50.840577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.514 [2024-11-26 19:25:50.840583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99832 len:8 PRP1 0x0 PRP2 0x0 00:24:34.514 [2024-11-26 19:25:50.840589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.514 [2024-11-26 19:25:50.840595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.514 [2024-11-26 19:25:50.840600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.514 [2024-11-26 19:25:50.840605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99840 len:8 PRP1 0x0 PRP2 0x0 00:24:34.514 [2024-11-26 19:25:50.840612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.514 [2024-11-26 19:25:50.840619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.514 [2024-11-26 19:25:50.840624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.514 [2024-11-26 19:25:50.840629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99848 len:8 PRP1 0x0 PRP2 0x0 00:24:34.514 [2024-11-26 19:25:50.840636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.514 [2024-11-26 19:25:50.840643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:24:34.514 [2024-11-26 19:25:50.840647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.514 [2024-11-26 19:25:50.840652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99856 len:8 PRP1 0x0 PRP2 0x0 00:24:34.514 [2024-11-26 19:25:50.840658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.514 [2024-11-26 19:25:50.840665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.514 [2024-11-26 19:25:50.840672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.514 [2024-11-26 19:25:50.840678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99864 len:8 PRP1 0x0 PRP2 0x0 00:24:34.514 [2024-11-26 19:25:50.840684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.514 [2024-11-26 19:25:50.840690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.514 [2024-11-26 19:25:50.850092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.514 [2024-11-26 19:25:50.850104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99872 len:8 PRP1 0x0 PRP2 0x0 00:24:34.514 [2024-11-26 19:25:50.850114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.514 [2024-11-26 19:25:50.850124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.514 [2024-11-26 19:25:50.850130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.514 [2024-11-26 19:25:50.850137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99104 len:8 PRP1 0x0 PRP2 0x0 00:24:34.514 [2024-11-26 19:25:50.850145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.514 [2024-11-26 19:25:50.850154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.514 [2024-11-26 19:25:50.850160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.514 [2024-11-26 19:25:50.850167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99112 len:8 PRP1 0x0 PRP2 0x0 00:24:34.514 [2024-11-26 19:25:50.850175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.514 [2024-11-26 19:25:50.850227] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:34.514 [2024-11-26 19:25:50.850254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.514 [2024-11-26 19:25:50.850264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.514 [2024-11-26 19:25:50.850274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.514 [2024-11-26 19:25:50.850285] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.514 [2024-11-26 19:25:50.850294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:34.514 [2024-11-26 19:25:50.850303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.514 [2024-11-26 19:25:50.850312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:34.514 [2024-11-26 19:25:50.850320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.514 [2024-11-26 19:25:50.850328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:24:34.514 [2024-11-26 19:25:50.850357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x779370 (9): Bad file descriptor
00:24:34.514 [2024-11-26 19:25:50.854105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:24:34.514 [2024-11-26 19:25:50.884520] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:24:34.514 11206.80 IOPS, 43.78 MiB/s [2024-11-26T18:25:57.628Z] 11220.82 IOPS, 43.83 MiB/s [2024-11-26T18:25:57.628Z] 11244.00 IOPS, 43.92 MiB/s [2024-11-26T18:25:57.628Z] 11243.38 IOPS, 43.92 MiB/s [2024-11-26T18:25:57.628Z] 11269.21 IOPS, 44.02 MiB/s [2024-11-26T18:25:57.628Z] 11283.93 IOPS, 44.08 MiB/s
00:24:34.514 Latency(us)
00:24:34.514 [2024-11-26T18:25:57.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:34.514 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:34.514 Verification LBA range: start 0x0 length 0x4000
00:24:34.514 NVMe0n1 : 15.01 11284.06 44.08 689.42 0.00 10668.50 427.15 27962.03
00:24:34.514 [2024-11-26T18:25:57.628Z] ===================================================================================================================
00:24:34.514 [2024-11-26T18:25:57.628Z] Total : 11284.06 44.08 689.42 0.00 10668.50 427.15 27962.03
00:24:34.514 Received shutdown signal, test time was about 15.000000 seconds
00:24:34.514
00:24:34.514 Latency(us)
00:24:34.514 [2024-11-26T18:25:57.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:34.514 [2024-11-26T18:25:57.628Z] ===================================================================================================================
00:24:34.514 [2024-11-26T18:25:57.628Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:34.514 19:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:34.514 19:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:34.514 19:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:34.514 19:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3844943
00:24:34.514 19:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:34.514 19:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3844943 /var/tmp/bdevperf.sock
00:24:34.514 19:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3844943 ']'
00:24:34.514 19:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:34.514 19:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:34.514 19:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:34.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:34.514 19:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:34.514 19:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:34.514 19:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:34.514 19:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:24:34.514 19:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:34.514 [2024-11-26 19:25:57.484948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:34.510 19:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:34.773 [2024-11-26 19:25:57.693497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:24:34.773 19:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:35.031 NVMe0n1
00:24:35.031 19:25:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:35.597
00:24:35.597 19:25:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:35.597
00:24:35.855 19:25:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:35.855 19:25:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:24:35.855 19:25:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:36.114 19:25:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:24:39.540 19:26:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:39.540 19:26:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:24:39.540 19:26:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:39.540 19:26:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3845865
00:24:39.540 19:26:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3845865
00:24:40.533 {
00:24:40.533 "results": [
00:24:40.533 {
00:24:40.533 "job": "NVMe0n1",
00:24:40.533 "core_mask": "0x1",
00:24:40.533 "workload": "verify",
00:24:40.533 "status": "finished",
00:24:40.533 "verify_range": {
00:24:40.533 "start": 0,
00:24:40.533 "length": 16384
00:24:40.533 },
00:24:40.533 "queue_depth": 128,
00:24:40.533 "io_size": 4096,
00:24:40.533 "runtime": 1.005449,
00:24:40.533 "iops": 11439.66526397659,
00:24:40.533 "mibps": 44.68619243740856,
00:24:40.533 "io_failed": 0,
00:24:40.533 "io_timeout": 0,
00:24:40.533 "avg_latency_us": 11142.149672355117,
00:24:40.533 "min_latency_us": 2356.175238095238,
00:24:40.533 "max_latency_us": 9424.700952380952
00:24:40.533 }
00:24:40.533 ],
00:24:40.533 "core_count": 1
00:24:40.533 }
00:24:40.533 19:26:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:40.533 [2024-11-26 19:25:57.091332] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization...
00:24:40.533 [2024-11-26 19:25:57.091382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844943 ]
00:24:40.533 [2024-11-26 19:25:57.165407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:40.533 [2024-11-26 19:25:57.202777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:40.533 [2024-11-26 19:25:59.132997] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:40.534 [2024-11-26 19:25:59.133042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:40.534 [2024-11-26 19:25:59.133053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.534 [2024-11-26 19:25:59.133062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:40.534 [2024-11-26 19:25:59.133069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.534 [2024-11-26 19:25:59.133077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:40.534 [2024-11-26 19:25:59.133083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.534 [2024-11-26 19:25:59.133090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:40.534 [2024-11-26 19:25:59.133097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.534 [2024-11-26 19:25:59.133107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:24:40.534 [2024-11-26 19:25:59.133132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:24:40.534 [2024-11-26 19:25:59.133146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a43370 (9): Bad file descriptor
00:24:40.534 [2024-11-26 19:25:59.179592] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:24:40.534 Running I/O for 1 seconds...
00:24:40.534 11374.00 IOPS, 44.43 MiB/s
00:24:40.534
00:24:40.534 Latency(us)
00:24:40.534 [2024-11-26T18:26:03.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:40.534 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:40.534 Verification LBA range: start 0x0 length 0x4000
00:24:40.534 NVMe0n1 : 1.01 11439.67 44.69 0.00 0.00 11142.15 2356.18 9424.70
00:24:40.534 [2024-11-26T18:26:03.648Z] ===================================================================================================================
00:24:40.534 [2024-11-26T18:26:03.648Z] Total : 11439.67 44.69 0.00 0.00 11142.15 2356.18 9424.70
00:24:40.534 19:26:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:40.534 19:26:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:24:40.793 19:26:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:40.793 19:26:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:40.793 19:26:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:24:41.052 19:26:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:41.311 19:26:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:24:44.597 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:44.597 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:24:44.597 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3844943
00:24:44.597 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3844943 ']'
00:24:44.597 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3844943
00:24:44.597 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:24:44.597 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux =
Linux ']' 00:24:44.597 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3844943 00:24:44.597 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:44.597 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:44.597 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3844943' 00:24:44.597 killing process with pid 3844943 00:24:44.597 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3844943 00:24:44.597 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3844943 00:24:44.597 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:44.597 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:44.856 rmmod nvme_tcp 00:24:44.856 rmmod nvme_fabrics 00:24:44.856 rmmod nvme_keyring 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3842011 ']' 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3842011 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3842011 ']' 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3842011 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3842011 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3842011' 00:24:44.856 killing process with pid 3842011 
00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3842011 00:24:44.856 19:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3842011 00:24:45.116 19:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:45.116 19:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:45.116 19:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:45.116 19:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:45.116 19:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:45.116 19:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:45.116 19:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:45.116 19:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:45.116 19:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:45.116 19:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.116 19:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.116 19:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:47.651 00:24:47.651 real 0m37.370s 00:24:47.651 user 1m58.194s 00:24:47.651 sys 0m7.934s 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:47.651 ************************************ 00:24:47.651 END TEST nvmf_failover 00:24:47.651 ************************************ 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.651 ************************************ 00:24:47.651 START TEST nvmf_host_discovery 00:24:47.651 ************************************ 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:47.651 * Looking for test storage... 
00:24:47.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:47.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.651 --rc genhtml_branch_coverage=1 00:24:47.651 --rc genhtml_function_coverage=1 00:24:47.651 --rc genhtml_legend=1 00:24:47.651 --rc geninfo_all_blocks=1 00:24:47.651 --rc geninfo_unexecuted_blocks=1 00:24:47.651 00:24:47.651 ' 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:47.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.651 --rc genhtml_branch_coverage=1 00:24:47.651 --rc genhtml_function_coverage=1 00:24:47.651 --rc genhtml_legend=1 00:24:47.651 --rc geninfo_all_blocks=1 00:24:47.651 --rc geninfo_unexecuted_blocks=1 00:24:47.651 00:24:47.651 ' 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:47.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.651 --rc genhtml_branch_coverage=1 00:24:47.651 --rc genhtml_function_coverage=1 00:24:47.651 --rc genhtml_legend=1 00:24:47.651 --rc geninfo_all_blocks=1 00:24:47.651 --rc geninfo_unexecuted_blocks=1 00:24:47.651 00:24:47.651 ' 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:47.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.651 --rc genhtml_branch_coverage=1 00:24:47.651 --rc genhtml_function_coverage=1 00:24:47.651 --rc genhtml_legend=1 00:24:47.651 --rc geninfo_all_blocks=1 00:24:47.651 --rc geninfo_unexecuted_blocks=1 00:24:47.651 00:24:47.651 ' 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:47.651 19:26:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.651 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:47.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:47.652 19:26:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:54.222 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:54.223 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:54.223 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.223 19:26:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:54.223 Found net devices under 0000:86:00.0: cvl_0_0 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:54.223 Found net devices under 0000:86:00.1: cvl_0_1 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:54.223 
19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:54.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:24:54.223 00:24:54.223 --- 10.0.0.2 ping statistics --- 00:24:54.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.223 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:54.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:24:54.223 00:24:54.223 --- 10.0.0.1 ping statistics --- 00:24:54.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.223 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:54.223 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3850319 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3850319 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3850319 ']' 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.224 [2024-11-26 19:26:16.486711] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:24:54.224 [2024-11-26 19:26:16.486761] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.224 [2024-11-26 19:26:16.564268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.224 [2024-11-26 19:26:16.603747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.224 [2024-11-26 19:26:16.603782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.224 [2024-11-26 19:26:16.603788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.224 [2024-11-26 19:26:16.603794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.224 [2024-11-26 19:26:16.603799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:54.224 [2024-11-26 19:26:16.604352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.224 [2024-11-26 19:26:16.743573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.224 [2024-11-26 19:26:16.755785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.224 null0 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.224 null1 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3850340 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3850340 /tmp/host.sock 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3850340 ']' 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:54.224 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.224 19:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.224 [2024-11-26 19:26:16.836207] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:24:54.224 [2024-11-26 19:26:16.836250] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3850340 ] 00:24:54.224 [2024-11-26 19:26:16.908546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.224 [2024-11-26 19:26:16.950839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.224 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.224 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:54.224 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:54.224 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:54.224 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.224 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.224 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.224 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:54.224 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.224 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.224 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.224 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:54.224 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:54.224 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:54.224 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.225 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.485 [2024-11-26 19:26:17.353307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:54.485 19:26:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.485 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:24:54.486 19:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:55.054 [2024-11-26 19:26:18.110801] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:55.054 [2024-11-26 19:26:18.110818] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:55.055 [2024-11-26 19:26:18.110831] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:55.313 
[2024-11-26 19:26:18.198094] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:55.313 [2024-11-26 19:26:18.299894] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:55.313 [2024-11-26 19:26:18.300655] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1a25e30:1 started. 00:24:55.313 [2024-11-26 19:26:18.302023] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:55.313 [2024-11-26 19:26:18.302038] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:55.313 [2024-11-26 19:26:18.308894] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1a25e30 was disconnected and freed. delete nvme_qpair. 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.575 19:26:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:55.575 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.834 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:24:55.834 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:55.834 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:55.834 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:55.834 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:55.834 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:55.834 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:55.834 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:55.834 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:55.834 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:24:55.834 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:55.834 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:55.834 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.834 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.834 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.835 19:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:56.095 [2024-11-26 19:26:19.009485] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1a262f0:1 started. 00:24:56.095 [2024-11-26 19:26:19.011829] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1a262f0 was disconnected and freed. delete nvme_qpair. 
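The xtrace lines above keep expanding the same two host-side helpers from host/discovery.sh (the @55 and @59 markers). Reconstructed from this trace, they reduce to roughly the pipelines below; /tmp/host.sock and the jq filters are copied from the log, while the rpc_cmd wrapper and the variable names are a sketch of the usual SPDK test plumbing, not the verbatim source.

    # Minimal sketch (bash), reconstructed from the xtrace above.
    HOST_SOCK=/tmp/host.sock          # RPC socket of the host-side SPDK app, as seen in the trace

    rpc_cmd() {
        # Assumed thin wrapper around SPDK's JSON-RPC client, scripts/rpc.py.
        "$rootdir/scripts/rpc.py" "$@"
    }

    get_subsystem_names() {
        # Controllers the host has attached (host/discovery.sh@59 in the trace)
        rpc_cmd -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # Bdevs created from the attached namespaces (host/discovery.sh@55 in the trace)
        rpc_cmd -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

Both helpers return one sorted, space-separated line, which is why the test can compare their output against literals such as "nvme0" or "nvme0n1 nvme0n2".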
00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.095 [2024-11-26 19:26:19.090285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:56.095 [2024-11-26 19:26:19.090773] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:56.095 [2024-11-26 19:26:19.090791] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.095 [2024-11-26 19:26:19.177370] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.095 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:56.353 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.353 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:56.353 19:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:56.353 [2024-11-26 19:26:19.276099] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:24:56.353 [2024-11-26 19:26:19.276133] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:56.353 [2024-11-26 19:26:19.276141] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:56.353 [2024-11-26 19:26:19.276146] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.293 [2024-11-26 19:26:20.326229] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:57.293 [2024-11-26 19:26:20.326253] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:57.293 [2024-11-26 19:26:20.333793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.293 [2024-11-26 19:26:20.333814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.293 [2024-11-26 19:26:20.333823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.293 [2024-11-26 19:26:20.333831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.293 [2024-11-26 19:26:20.333838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.293 [2024-11-26 19:26:20.333845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.293 [2024-11-26 19:26:20.333855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.293 [2024-11-26 19:26:20.333862] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.293 [2024-11-26 19:26:20.333870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f6390 is same with the state(6) to be set 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:57.293 [2024-11-26 19:26:20.343805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f6390 (9): Bad file descriptor 00:24:57.293 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.293 [2024-11-26 19:26:20.353839] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:57.293 [2024-11-26 19:26:20.353850] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:57.293 [2024-11-26 19:26:20.353854] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:57.293 [2024-11-26 19:26:20.353859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:57.293 [2024-11-26 19:26:20.353877] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:57.293 [2024-11-26 19:26:20.354062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.293 [2024-11-26 19:26:20.354077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f6390 with addr=10.0.0.2, port=4420 00:24:57.293 [2024-11-26 19:26:20.354085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f6390 is same with the state(6) to be set 00:24:57.293 [2024-11-26 19:26:20.354097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f6390 (9): Bad file descriptor 00:24:57.293 [2024-11-26 19:26:20.354112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:57.293 [2024-11-26 19:26:20.354119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:57.293 [2024-11-26 19:26:20.354128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:57.293 [2024-11-26 19:26:20.354134] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:57.293 [2024-11-26 19:26:20.354138] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
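Every wait in this section funnels through waitforcondition from common/autotest_common.sh (the @918-@924 markers). From the trace it is a bounded eval-and-retry loop, roughly as sketched below; only the success path and the sleep are visible in this excerpt, so the give-up branch is an assumption.

    waitforcondition() {
        # Re-evaluate an arbitrary shell condition until it holds or attempts run out.
        local cond=$1
        local max=10             # attempt budget, per @919 in the trace
        while (( max-- )); do
            if eval "$cond"; then
                return 0         # condition met (@922)
            fi
            sleep 1              # @924
        done
        return 1                 # assumed failure path; not exercised in this excerpt
    }

    # Invocations exactly as they appear in the xtrace:
    #   waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    #   waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'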
00:24:57.294 [2024-11-26 19:26:20.354143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:57.294 [2024-11-26 19:26:20.363906] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:57.294 [2024-11-26 19:26:20.363916] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:57.294 [2024-11-26 19:26:20.363920] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:57.294 [2024-11-26 19:26:20.363924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:57.294 [2024-11-26 19:26:20.363941] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:57.294 [2024-11-26 19:26:20.364049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.294 [2024-11-26 19:26:20.364060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f6390 with addr=10.0.0.2, port=4420 00:24:57.294 [2024-11-26 19:26:20.364068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f6390 is same with the state(6) to be set 00:24:57.294 [2024-11-26 19:26:20.364078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f6390 (9): Bad file descriptor 00:24:57.294 [2024-11-26 19:26:20.364087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:57.294 [2024-11-26 19:26:20.364093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:57.294 [2024-11-26 19:26:20.364099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:57.294 [2024-11-26 19:26:20.364105] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:57.294 [2024-11-26 19:26:20.364109] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:57.294 [2024-11-26 19:26:20.364113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:57.294 [2024-11-26 19:26:20.373972] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:57.294 [2024-11-26 19:26:20.373986] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:57.294 [2024-11-26 19:26:20.373990] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:57.294 [2024-11-26 19:26:20.373995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:57.294 [2024-11-26 19:26:20.374010] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:57.294 [2024-11-26 19:26:20.374122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.294 [2024-11-26 19:26:20.374134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f6390 with addr=10.0.0.2, port=4420 00:24:57.294 [2024-11-26 19:26:20.374141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f6390 is same with the state(6) to be set 00:24:57.294 [2024-11-26 19:26:20.374152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f6390 (9): Bad file descriptor 00:24:57.294 [2024-11-26 19:26:20.374167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:57.294 [2024-11-26 19:26:20.374173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:57.294 [2024-11-26 19:26:20.374180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:57.294 [2024-11-26 19:26:20.374186] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:57.294 [2024-11-26 19:26:20.374191] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:57.294 [2024-11-26 19:26:20.374194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:57.294 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.294 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:57.294 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:57.294 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:57.294 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:57.294 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:57.294 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:57.294 [2024-11-26 19:26:20.384040] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:57.294 [2024-11-26 19:26:20.384052] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:57.294 [2024-11-26 19:26:20.384057] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:57.294 [2024-11-26 19:26:20.384061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:57.294 [2024-11-26 19:26:20.384074] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
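The is_notification_count_eq 0 / 1 checks scattered through this section build on the same machinery. Pieced together from the @74-@80 trace lines (and from notify_id advancing 0 -> 1 -> 2 in this run), they look roughly like the sketch below; the exact bookkeeping in the real script is inferred, and the helpers from the earlier sketches (rpc_cmd, HOST_SOCK, waitforcondition) are assumed to be in scope.

    notify_id=0   # offset of the last notification already consumed

    get_notification_count() {
        # Count notifications newer than notify_id, then advance the offset (discovery.sh@74-@75)
        notification_count=$(rpc_cmd -s "$HOST_SOCK" notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        # Wait until exactly $1 new notifications (e.g. bdev_register events) have arrived (discovery.sh@79-@80)
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }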
00:24:57.294 [2024-11-26 19:26:20.384175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.294 [2024-11-26 19:26:20.384186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f6390 with addr=10.0.0.2, port=4420 00:24:57.294 [2024-11-26 19:26:20.384193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f6390 is same with the state(6) to be set 00:24:57.294 [2024-11-26 19:26:20.384203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f6390 (9): Bad file descriptor 00:24:57.294 [2024-11-26 19:26:20.384212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:57.294 [2024-11-26 19:26:20.384219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:57.294 [2024-11-26 19:26:20.384226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:57.294 [2024-11-26 19:26:20.384231] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:57.294 [2024-11-26 19:26:20.384235] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:57.294 [2024-11-26 19:26:20.384239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:57.294 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:57.294 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.294 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:57.294 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.294 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:57.294 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.294 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:57.294 [2024-11-26 19:26:20.394104] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:57.294 [2024-11-26 19:26:20.394117] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:57.294 [2024-11-26 19:26:20.394121] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:57.294 [2024-11-26 19:26:20.394125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:57.294 [2024-11-26 19:26:20.394140] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:57.294 [2024-11-26 19:26:20.394248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.294 [2024-11-26 19:26:20.394264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f6390 with addr=10.0.0.2, port=4420 00:24:57.294 [2024-11-26 19:26:20.394272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f6390 is same with the state(6) to be set 00:24:57.294 [2024-11-26 19:26:20.394282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f6390 (9): Bad file descriptor 00:24:57.294 [2024-11-26 19:26:20.394307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:57.294 [2024-11-26 19:26:20.394315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:57.294 [2024-11-26 19:26:20.394322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:57.294 [2024-11-26 19:26:20.394327] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:57.294 [2024-11-26 19:26:20.394332] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:57.294 [2024-11-26 19:26:20.394335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:57.294 [2024-11-26 19:26:20.404171] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:57.294 [2024-11-26 19:26:20.404181] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:57.294 [2024-11-26 19:26:20.404185] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:57.294 [2024-11-26 19:26:20.404190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:57.294 [2024-11-26 19:26:20.404203] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:57.555 [2024-11-26 19:26:20.404413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-11-26 19:26:20.404425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f6390 with addr=10.0.0.2, port=4420 00:24:57.555 [2024-11-26 19:26:20.404432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f6390 is same with the state(6) to be set 00:24:57.555 [2024-11-26 19:26:20.404443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f6390 (9): Bad file descriptor 00:24:57.555 [2024-11-26 19:26:20.404453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:57.555 [2024-11-26 19:26:20.404459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:57.555 [2024-11-26 19:26:20.404466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:57.555 [2024-11-26 19:26:20.404471] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
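For orientation, the target-side changes the host is reacting to in this stretch are all plain nvmf RPCs, copied from the @86-@127 trace markers above (they carry no -s flag, i.e. they go to the target app's default RPC socket rather than /tmp/host.sock). The wrapper function is only a sketch; the null0 and null1 bdevs are assumed to have been created earlier in the run.

    drive_target_side() {
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0                                      # @86
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0                                # @90  -> nvme0n1 on the host
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420     # @96
        rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test           # @103
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1                                # @111 -> nvme0n2 on the host
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421     # @118
        rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420  # @127
    }

Each step is followed in the log by the matching discovery AER, log-page fetch, and controller/bdev updates on the host side.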
00:24:57.555 [2024-11-26 19:26:20.404476] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:57.555 [2024-11-26 19:26:20.404480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:57.555 [2024-11-26 19:26:20.414234] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:57.555 [2024-11-26 19:26:20.414246] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:57.555 [2024-11-26 19:26:20.414250] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:57.555 [2024-11-26 19:26:20.414255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:57.555 [2024-11-26 19:26:20.414268] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:57.555 [2024-11-26 19:26:20.414429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-11-26 19:26:20.414440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f6390 with addr=10.0.0.2, port=4420 00:24:57.555 [2024-11-26 19:26:20.414448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f6390 is same with the state(6) to be set 00:24:57.555 [2024-11-26 19:26:20.414457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f6390 (9): Bad file descriptor 00:24:57.555 [2024-11-26 19:26:20.414472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:57.555 [2024-11-26 19:26:20.414479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:57.555 [2024-11-26 19:26:20.414486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:57.555 [2024-11-26 19:26:20.414491] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:57.555 [2024-11-26 19:26:20.414495] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:57.555 [2024-11-26 19:26:20.414499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:57.555 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.555 [2024-11-26 19:26:20.424299] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:57.555 [2024-11-26 19:26:20.424311] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:57.555 [2024-11-26 19:26:20.424315] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:57.555 [2024-11-26 19:26:20.424319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:57.555 [2024-11-26 19:26:20.424332] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:57.555 [2024-11-26 19:26:20.424437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-11-26 19:26:20.424449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f6390 with addr=10.0.0.2, port=4420 00:24:57.555 [2024-11-26 19:26:20.424456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f6390 is same with the state(6) to be set 00:24:57.555 [2024-11-26 19:26:20.424466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f6390 (9): Bad file descriptor 00:24:57.555 [2024-11-26 19:26:20.424475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:57.555 [2024-11-26 19:26:20.424481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:57.555 [2024-11-26 19:26:20.424487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:57.555 [2024-11-26 19:26:20.424492] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:57.555 [2024-11-26 19:26:20.424497] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:57.555 [2024-11-26 19:26:20.424500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:57.555 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:57.555 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:57.555 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:57.555 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:57.555 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:57.555 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:57.555 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:57.555 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:57.555 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:57.555 [2024-11-26 19:26:20.434363] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:57.555 [2024-11-26 19:26:20.434375] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:57.555 [2024-11-26 19:26:20.434379] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
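The path assertions ($NVMF_PORT, $NVMF_SECOND_PORT) use one more helper, visible at the @63 markers: it lists the transport service IDs (TCP ports) of every path of a given controller, which is why the expected value moves from "4420" to "4420 4421" and, once the 4420 listener is gone, should settle on "4421". A sketch, with the jq filter copied from the trace and the surrounding function assumed:

    get_subsystem_paths() {
        # Ports of all connected paths for controller $1 (host/discovery.sh@63 in the trace)
        local ctrlr=$1
        rpc_cmd -s "$HOST_SOCK" bdev_nvme_get_controllers -n "$ctrlr" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }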
00:24:57.555 [2024-11-26 19:26:20.434383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:57.555 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:57.555 [2024-11-26 19:26:20.434395] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:57.555 [2024-11-26 19:26:20.434499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-11-26 19:26:20.434511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f6390 with addr=10.0.0.2, port=4420 00:24:57.555 [2024-11-26 19:26:20.434518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f6390 is same with the state(6) to be set 00:24:57.555 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.555 [2024-11-26 19:26:20.434528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f6390 (9): Bad file descriptor 00:24:57.555 [2024-11-26 19:26:20.434547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:57.555 [2024-11-26 19:26:20.434554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:57.555 [2024-11-26 19:26:20.434560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:57.555 [2024-11-26 19:26:20.434565] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:57.555 [2024-11-26 19:26:20.434570] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:57.555 [2024-11-26 19:26:20.434574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:57.555 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:57.555 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.555 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:57.555 [2024-11-26 19:26:20.444426] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:57.555 [2024-11-26 19:26:20.444440] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:57.555 [2024-11-26 19:26:20.444445] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:57.555 [2024-11-26 19:26:20.444451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:57.555 [2024-11-26 19:26:20.444465] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
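The repeated "connect() failed, errno = 111" / "Bad file descriptor" entries above are the expected fallout of removing the 4420 listener while the host still holds a qpair to it: errno 111 is ECONNREFUSED on Linux, every reconnect attempt is refused, and the discovery poller eventually drops the stale path (the ":4420 not found" entry just below). The test does not inspect these errors directly; it simply waits for the path list to shrink, along the lines of the sketch below (NVMF_SECOND_PORT=4421 is assumed to come from the test environment, and the helpers are the ones sketched earlier).

    # Condition string copied from the discovery.sh@131 trace line above.
    NVMF_SECOND_PORT=4421
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'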
00:24:57.555 [2024-11-26 19:26:20.444619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-11-26 19:26:20.444634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f6390 with addr=10.0.0.2, port=4420 00:24:57.555 [2024-11-26 19:26:20.444641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f6390 is same with the state(6) to be set 00:24:57.555 [2024-11-26 19:26:20.444651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f6390 (9): Bad file descriptor 00:24:57.555 [2024-11-26 19:26:20.444660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:57.556 [2024-11-26 19:26:20.444666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:57.556 [2024-11-26 19:26:20.444677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:57.556 [2024-11-26 19:26:20.444683] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:57.556 [2024-11-26 19:26:20.444687] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:57.556 [2024-11-26 19:26:20.444691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:57.556 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.556 [2024-11-26 19:26:20.452042] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:57.556 [2024-11-26 19:26:20.452058] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:57.556 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:24:57.556 19:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- 
# [[ 4421 == \4\4\2\1 ]] 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 
00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.493 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:58.494 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.494 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:58.752 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.752 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:58.752 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.752 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:58.752 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:58.752 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:58.752 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.753 
19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.753 19:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.688 [2024-11-26 19:26:22.751365] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:59.688 [2024-11-26 19:26:22.751382] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:59.688 [2024-11-26 19:26:22.751392] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:59.947 [2024-11-26 19:26:22.839655] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:00.207 [2024-11-26 19:26:23.067801] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:00.207 [2024-11-26 19:26:23.068387] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1a2b330:1 started. 00:25:00.207 [2024-11-26 19:26:23.070044] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:00.207 [2024-11-26 19:26:23.070067] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:00.207 [2024-11-26 19:26:23.071390] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1a2b330 was disconnected and freed. delete nvme_qpair. 
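The xtrace above is the suite's generic polling pattern: waitforcondition() re-evaluates a condition string up to max=10 times with a one-second sleep between attempts, and get_notification_count() feeds checks such as is_notification_count_eq by counting notify_get_notifications entries newer than the last consumed notify_id. A minimal sketch reconstructed from the trace follows; the canonical helpers live in common/autotest_common.sh and host/discovery.sh and may differ in detail.

# Reconstructed sketch of the polling helpers traced above (illustrative, not verbatim).
waitforcondition() {
	local cond=$1
	local max=10
	while ((max--)); do
		# cond is a condition string such as '[[ "$(get_bdev_list)" == "" ]]'
		eval "$cond" && return 0
		sleep 1
	done
	return 1
}

get_notification_count() {
	# Count notifications issued since the last notify_id we consumed.
	notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
	notify_id=$((notify_id + notification_count))
}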
00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.207 request: 00:25:00.207 { 00:25:00.207 "name": "nvme", 00:25:00.207 "trtype": "tcp", 00:25:00.207 "traddr": "10.0.0.2", 00:25:00.207 "adrfam": "ipv4", 00:25:00.207 "trsvcid": "8009", 00:25:00.207 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:00.207 "wait_for_attach": true, 00:25:00.207 "method": "bdev_nvme_start_discovery", 00:25:00.207 "req_id": 1 00:25:00.207 } 00:25:00.207 Got JSON-RPC error response 00:25:00.207 response: 00:25:00.207 { 00:25:00.207 "code": -17, 00:25:00.207 "message": "File exists" 00:25:00.207 } 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.207 request: 00:25:00.207 { 00:25:00.207 "name": "nvme_second", 00:25:00.207 "trtype": "tcp", 00:25:00.207 "traddr": "10.0.0.2", 00:25:00.207 "adrfam": "ipv4", 00:25:00.207 "trsvcid": "8009", 00:25:00.207 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:00.207 "wait_for_attach": true, 00:25:00.207 "method": "bdev_nvme_start_discovery", 00:25:00.207 "req_id": 1 00:25:00.207 } 00:25:00.207 Got JSON-RPC error response 00:25:00.207 response: 00:25:00.207 { 00:25:00.207 "code": -17, 00:25:00.207 "message": "File exists" 00:25:00.207 } 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.207 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:00.208 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:00.208 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:00.208 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:00.208 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:00.208 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:00.208 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:00.208 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:00.208 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q 
nqn.2021-12.io.spdk:test -T 3000 00:25:00.208 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.208 19:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.583 [2024-11-26 19:26:24.310777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-11-26 19:26:24.310805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2bc50 with addr=10.0.0.2, port=8010 00:25:01.583 [2024-11-26 19:26:24.310820] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:01.583 [2024-11-26 19:26:24.310827] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:01.583 [2024-11-26 19:26:24.310833] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:02.524 [2024-11-26 19:26:25.313238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.524 [2024-11-26 19:26:25.313261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2bc50 with addr=10.0.0.2, port=8010 00:25:02.524 [2024-11-26 19:26:25.313273] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:02.524 [2024-11-26 19:26:25.313279] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:02.524 [2024-11-26 19:26:25.313285] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:03.459 [2024-11-26 19:26:26.315434] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:03.459 request: 00:25:03.459 { 00:25:03.459 "name": "nvme_second", 00:25:03.459 "trtype": "tcp", 00:25:03.459 "traddr": "10.0.0.2", 00:25:03.459 "adrfam": "ipv4", 00:25:03.459 "trsvcid": "8010", 00:25:03.459 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:03.459 "wait_for_attach": false, 00:25:03.459 "attach_timeout_ms": 3000, 00:25:03.459 "method": "bdev_nvme_start_discovery", 00:25:03.459 "req_id": 1 00:25:03.459 } 00:25:03.459 Got JSON-RPC error response 00:25:03.459 response: 00:25:03.459 { 00:25:03.459 "code": -110, 00:25:03.459 "message": "Connection timed out" 00:25:03.459 } 00:25:03.459 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:03.459 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:03.459 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:03.459 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:03.459 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:03.459 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.460 
19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3850340 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:03.460 rmmod nvme_tcp 00:25:03.460 rmmod nvme_fabrics 00:25:03.460 rmmod nvme_keyring 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3850319 ']' 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3850319 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3850319 ']' 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3850319 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3850319 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3850319' 00:25:03.460 killing process with pid 3850319 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3850319 00:25:03.460 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3850319 00:25:03.719 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:03.719 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:03.719 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:03.719 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:03.719 19:26:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:03.719 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:03.719 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:03.719 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:03.719 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:03.719 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.719 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.719 19:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.622 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:05.622 00:25:05.622 real 0m18.418s 00:25:05.622 user 0m22.845s 00:25:05.622 sys 0m5.970s 00:25:05.622 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:05.622 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.622 ************************************ 00:25:05.622 END TEST nvmf_host_discovery 00:25:05.622 ************************************ 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.882 ************************************ 00:25:05.882 START TEST nvmf_host_multipath_status 00:25:05.882 ************************************ 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:05.882 * Looking for test storage... 
00:25:05.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:05.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.882 --rc genhtml_branch_coverage=1 00:25:05.882 --rc genhtml_function_coverage=1 00:25:05.882 --rc genhtml_legend=1 00:25:05.882 --rc geninfo_all_blocks=1 00:25:05.882 --rc geninfo_unexecuted_blocks=1 00:25:05.882 00:25:05.882 ' 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:05.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.882 --rc genhtml_branch_coverage=1 00:25:05.882 --rc genhtml_function_coverage=1 00:25:05.882 --rc genhtml_legend=1 00:25:05.882 --rc geninfo_all_blocks=1 00:25:05.882 --rc geninfo_unexecuted_blocks=1 00:25:05.882 00:25:05.882 ' 00:25:05.882 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:05.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.883 --rc genhtml_branch_coverage=1 00:25:05.883 --rc genhtml_function_coverage=1 00:25:05.883 --rc genhtml_legend=1 00:25:05.883 --rc geninfo_all_blocks=1 00:25:05.883 --rc geninfo_unexecuted_blocks=1 00:25:05.883 00:25:05.883 ' 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:05.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.883 --rc genhtml_branch_coverage=1 00:25:05.883 --rc genhtml_function_coverage=1 00:25:05.883 --rc genhtml_legend=1 00:25:05.883 --rc geninfo_all_blocks=1 00:25:05.883 --rc geninfo_unexecuted_blocks=1 00:25:05.883 00:25:05.883 ' 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
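The scripts/common.sh trace above is the lcov version gate every test file runs: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them element-wise to decide which LCOV_OPTS/LCOV values to export. A condensed sketch reconstructed from that trace (the real cmp_versions in scripts/common.sh also validates each field with decimal() and handles the '>' and '=' operators more carefully):

# Condensed reconstruction of the version comparison traced above (illustrative only).
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
	local IFS=.-:
	local -a ver1 ver2
	read -ra ver1 <<< "$1"
	read -ra ver2 <<< "$3"
	local op=$2 v
	for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
		((${ver1[v]:-0} > ${ver2[v]:-0})) && [[ $op == '>' ]] && return 0
		((${ver1[v]:-0} < ${ver2[v]:-0})) && [[ $op == '<' ]] && return 0
		((${ver1[v]:-0} != ${ver2[v]:-0})) && return 1
	done
	[[ $op == '=' ]]   # all fields equal: only '=' is satisfied
}

lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* options"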
00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.883 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:06.143 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.143 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:06.143 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:06.143 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:06.143 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.143 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.143 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.143 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:06.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:06.143 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:06.143 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:06.143 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:06.143 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:06.143 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:06.143 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:06.143 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:06.143 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:06.143 19:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:06.143 19:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:06.143 19:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:06.143 19:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.143 19:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:06.143 19:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:06.143 19:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:06.143 19:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.143 19:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.143 19:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.143 19:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:06.143 19:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:06.143 19:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:06.143 19:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:12.743 19:26:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:12.743 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:12.743 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:12.743 Found net devices under 0000:86:00.0: cvl_0_0 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:25:12.743 Found net devices under 0000:86:00.1: cvl_0_1 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:12.743 19:26:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:12.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:12.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:25:12.743 00:25:12.743 --- 10.0.0.2 ping statistics --- 00:25:12.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.743 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:12.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:12.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:25:12.743 00:25:12.743 --- 10.0.0.1 ping statistics --- 00:25:12.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.743 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:12.743 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:12.744 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:12.744 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:12.744 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:12.744 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:12.744 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:12.744 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:12.744 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:12.744 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:12.744 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:12.744 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:12.744 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3855645 00:25:12.744 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:12.744 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3855645 00:25:12.744 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3855645 ']' 00:25:12.744 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.744 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:12.744 19:26:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.744 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:12.744 19:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:12.744 [2024-11-26 19:26:34.964445] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:25:12.744 [2024-11-26 19:26:34.964496] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:12.744 [2024-11-26 19:26:35.043035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:12.744 [2024-11-26 19:26:35.082737] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:12.744 [2024-11-26 19:26:35.082776] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:12.744 [2024-11-26 19:26:35.082783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:12.744 [2024-11-26 19:26:35.082790] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:12.744 [2024-11-26 19:26:35.082797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:12.744 [2024-11-26 19:26:35.083950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.744 [2024-11-26 19:26:35.083951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.744 19:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:12.744 19:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:12.744 19:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:12.744 19:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:12.744 19:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:12.744 19:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.744 19:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3855645 00:25:12.744 19:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:12.744 [2024-11-26 19:26:35.392533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.744 19:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:12.744 Malloc0 00:25:12.744 19:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:25:12.744 19:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:13.001 19:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:13.258 [2024-11-26 19:26:36.215392] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.258 19:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:13.516 [2024-11-26 19:26:36.403855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:13.516 19:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:13.516 19:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3855899 00:25:13.516 19:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:13.516 19:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3855899 /var/tmp/bdevperf.sock 00:25:13.516 19:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3855899 ']' 00:25:13.516 19:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:13.516 19:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.516 19:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:13.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
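The trace above is the target-side bring-up for this test: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace, a single Malloc namespace is exported behind two TCP listeners (4420 and 4421) on 10.0.0.2, and bdevperf is started against its own RPC socket so the host side can later attach both listeners as multipath controllers (the attach with -x multipath appears in the next entries). Condensed from the commands visible in the trace; the full /var/jenkins/... path to rpc.py and bdevperf is shortened here for readability:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# bdevperf is then started on its own RPC socket and waits there for the attach and perform_tests calls
bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90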
00:25:13.516 19:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.516 19:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:13.773 19:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.773 19:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:13.773 19:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:13.773 19:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:14.339 Nvme0n1 00:25:14.339 19:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:14.908 Nvme0n1 00:25:14.908 19:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:14.908 19:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:16.811 19:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:16.811 19:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:17.070 19:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:17.070 19:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:18.448 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:18.448 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:18.448 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.448 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:18.448 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.448 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:18.448 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.448 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:18.707 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:18.707 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:18.707 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.707 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:18.707 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.707 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:18.707 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.707 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:18.966 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.966 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:18.966 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.966 19:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:19.225 19:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.225 19:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:19.225 19:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.225 19:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:19.482 19:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.482 19:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:19.482 19:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
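Every port_status check in the trace is the same pattern: query bdev_nvme_get_io_paths over bdevperf's RPC socket, pick out one flag for one listener with jq, and compare it against the expected value. A rough shell equivalent of that helper is sketched below; the jq filter and the three-argument call form are taken from the trace, but the function body itself is an approximation, not copied from multipath_status.sh:

port_status() {
    # port_status <trsvcid> <current|connected|accessible> <expected>
    local port=$1 attr=$2 expected=$3
    local value
    value=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
    [[ "$value" == "$expected" ]]
}
# e.g. port_status 4420 current true      # is 4420 the path I/O is currently routed to?
#      port_status 4421 accessible false  # is 4421 reported inaccessible?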
00:25:19.740 19:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:19.740 19:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:21.124 19:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:21.124 19:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:21.124 19:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.124 19:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:21.124 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:21.124 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:21.124 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.124 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:21.124 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.124 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:21.124 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.124 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:21.383 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.383 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:21.383 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.383 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:21.643 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.643 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:21.643 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:25:21.643 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:21.902 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.902 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:21.902 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.902 19:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:22.160 19:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.161 19:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:22.161 19:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:22.420 19:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:22.420 19:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:23.799 19:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:23.799 19:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:23.799 19:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.799 19:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:23.799 19:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.799 19:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:23.799 19:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.799 19:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:24.059 19:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:24.059 19:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:24.059 19:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.059 19:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:24.059 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.059 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:24.059 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.059 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:24.318 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.318 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:24.318 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.318 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:24.575 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.575 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:24.575 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.575 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:24.834 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.834 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:24.834 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:25.093 19:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:25.093 19:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:26.470 19:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:26.470 19:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:26.470 19:26:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.470 19:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:26.470 19:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.470 19:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:26.470 19:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.470 19:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:26.729 19:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:26.729 19:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:26.729 19:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.729 19:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:26.729 19:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.729 19:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:26.729 19:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.729 19:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:26.987 19:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.987 19:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:26.987 19:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.987 19:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:27.246 19:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.246 19:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:27.246 19:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.246 19:26:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:27.505 19:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:27.505 19:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:27.505 19:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:27.763 19:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:27.763 19:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:29.140 19:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:29.140 19:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:29.140 19:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.140 19:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:29.140 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:29.140 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:29.140 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:29.140 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.140 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:29.140 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:29.140 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.140 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:29.399 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.399 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:29.399 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.399 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:29.658 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.658 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:29.658 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.658 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:29.918 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:29.918 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:29.918 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.918 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:29.918 19:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:29.918 19:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:29.918 19:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:30.177 19:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:30.436 19:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:31.373 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:31.373 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:31.373 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.373 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:31.632 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:31.632 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:31.632 19:26:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.632 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:31.891 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.891 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:31.891 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.891 19:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:31.891 19:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.891 19:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:32.150 19:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.150 19:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:32.150 19:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.150 19:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:32.150 19:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.150 19:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:32.409 19:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.409 19:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:32.409 19:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.409 19:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:32.667 19:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.667 19:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:32.927 19:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:25:32.927 19:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:33.186 19:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:33.186 19:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:34.565 19:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:34.565 19:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:34.565 19:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.565 19:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:34.565 19:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.565 19:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:34.565 19:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.565 19:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:34.823 19:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.823 19:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:34.823 19:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.823 19:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:34.823 19:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.823 19:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:34.823 19:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.823 19:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:35.083 19:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.083 19:26:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:35.083 19:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.083 19:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:35.343 19:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.343 19:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:35.343 19:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.343 19:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:35.602 19:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.602 19:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:35.602 19:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:35.861 19:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:35.861 19:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:36.798 19:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:36.798 19:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:37.057 19:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.057 19:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:37.057 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:37.057 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:37.057 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.057 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:37.316 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.316 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:37.316 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.316 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:37.576 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.576 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:37.576 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.576 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:37.835 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.835 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:37.835 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.835 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:37.835 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.835 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:37.835 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.835 19:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:38.095 19:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.095 19:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:38.095 19:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:38.354 19:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:38.614 19:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
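The remainder of the trace repeats this check across ANA state combinations: the two listeners are cycled through optimized/optimized, non_optimized/optimized, non_optimized/non_optimized, non_optimized/inaccessible, inaccessible/inaccessible and inaccessible/optimized, the multipath policy is then switched to active_active with bdev_nvme_set_multipath_policy, and the matrix is walked again while bdevperf keeps I/O running. Matching each check_status invocation against the port_status calls that follow it, the six booleans read as:

# check_status <4420 current> <4421 current> <4420 connected> <4421 connected> \
#              <4420 accessible> <4421 accessible>
check_status true true true true true true   # active_active with both listeners optimized:
                                             # both paths current, connected and accessible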
00:25:39.550 19:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:39.550 19:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:39.550 19:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.550 19:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:39.809 19:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.809 19:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:39.809 19:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.809 19:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:40.068 19:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.068 19:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:40.068 19:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:40.068 19:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.068 19:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.068 19:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:40.068 19:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.068 19:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:40.327 19:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.327 19:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:40.327 19:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.327 19:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:40.585 19:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.585 19:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:40.585 19:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.585 19:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:40.844 19:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.844 19:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:40.844 19:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:41.104 19:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:41.104 19:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:42.479 19:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:42.479 19:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:42.479 19:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.479 19:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:42.479 19:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.479 19:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:42.479 19:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.479 19:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:42.768 19:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:42.768 19:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:42.768 19:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.768 19:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:42.768 19:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:25:42.768 19:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:42.768 19:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.768 19:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:43.026 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.026 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:43.026 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.026 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:43.284 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.284 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:43.284 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:43.284 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.542 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:43.542 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3855899 00:25:43.542 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3855899 ']' 00:25:43.542 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3855899 00:25:43.542 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:43.542 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:43.543 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3855899 00:25:43.543 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:43.543 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:43.543 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3855899' 00:25:43.543 killing process with pid 3855899 00:25:43.543 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3855899 00:25:43.543 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3855899 00:25:43.543 { 00:25:43.543 "results": [ 00:25:43.543 { 00:25:43.543 "job": "Nvme0n1", 
00:25:43.543 "core_mask": "0x4", 00:25:43.543 "workload": "verify", 00:25:43.543 "status": "terminated", 00:25:43.543 "verify_range": { 00:25:43.543 "start": 0, 00:25:43.543 "length": 16384 00:25:43.543 }, 00:25:43.543 "queue_depth": 128, 00:25:43.543 "io_size": 4096, 00:25:43.543 "runtime": 28.588005, 00:25:43.543 "iops": 10739.854005202531, 00:25:43.543 "mibps": 41.95255470782239, 00:25:43.543 "io_failed": 0, 00:25:43.543 "io_timeout": 0, 00:25:43.543 "avg_latency_us": 11898.598908855332, 00:25:43.543 "min_latency_us": 854.3085714285714, 00:25:43.543 "max_latency_us": 3019898.88 00:25:43.543 } 00:25:43.543 ], 00:25:43.543 "core_count": 1 00:25:43.543 } 00:25:43.803 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3855899 00:25:43.803 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:43.803 [2024-11-26 19:26:36.463970] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:25:43.803 [2024-11-26 19:26:36.464019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855899 ] 00:25:43.803 [2024-11-26 19:26:36.536700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.803 [2024-11-26 19:26:36.577679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:43.803 Running I/O for 90 seconds... 00:25:43.803 11619.00 IOPS, 45.39 MiB/s [2024-11-26T18:27:06.917Z] 11682.50 IOPS, 45.63 MiB/s [2024-11-26T18:27:06.917Z] 11572.00 IOPS, 45.20 MiB/s [2024-11-26T18:27:06.917Z] 11621.25 IOPS, 45.40 MiB/s [2024-11-26T18:27:06.917Z] 11652.40 IOPS, 45.52 MiB/s [2024-11-26T18:27:06.917Z] 11690.67 IOPS, 45.67 MiB/s [2024-11-26T18:27:06.917Z] 11685.14 IOPS, 45.65 MiB/s [2024-11-26T18:27:06.917Z] 11677.25 IOPS, 45.61 MiB/s [2024-11-26T18:27:06.917Z] 11687.67 IOPS, 45.65 MiB/s [2024-11-26T18:27:06.917Z] 11662.20 IOPS, 45.56 MiB/s [2024-11-26T18:27:06.917Z] 11671.45 IOPS, 45.59 MiB/s [2024-11-26T18:27:06.917Z] 11667.00 IOPS, 45.57 MiB/s [2024-11-26T18:27:06.917Z] [2024-11-26 19:26:50.615235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.803 [2024-11-26 19:26:50.615275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 
[2024-11-26 19:26:50.615363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7696 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:43.803 [2024-11-26 19:26:50.615725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.803 [2024-11-26 19:26:50.615732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.615744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:6 nsid:1 lba:7776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.615750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.615762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.615769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.615781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.615788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.615801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.615823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.615836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.615844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.615857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.615864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.615876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.615883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.615895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.615902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.615915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.615922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.615934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.615941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.615954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.615961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.615976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.615983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.615996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.804 
[2024-11-26 19:26:50.616160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 
cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:43.804 [2024-11-26 19:26:50.616844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.804 [2024-11-26 19:26:50.616851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.616867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.616874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.616890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.616897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.616912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.616919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.616935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.616942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.616958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.616965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.616981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.616988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 
[2024-11-26 19:26:50.617336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8328 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.617941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.617994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.805 [2024-11-26 19:26:50.618002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:43.805 [2024-11-26 19:26:50.618020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:23 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.806 [2024-11-26 19:26:50.618269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.806 [2024-11-26 19:26:50.618293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.806 [2024-11-26 19:26:50.618318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.806 [2024-11-26 19:26:50.618342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.806 [2024-11-26 19:26:50.618365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.806 [2024-11-26 19:26:50.618389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.806 [2024-11-26 19:26:50.618412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
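Each "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" completion in the dump above is NVMe status code type 0x3 (path-related) with status code 0x2, i.e. the target reporting the namespace as ANA-inaccessible on that path. That is the expected response while a listener's ANA state has been set to inaccessible, and the initiator retries the I/O on the remaining path, which lines up with the dip and recovery visible in the per-second IOPS samples that follow. An illustrative way to tally these completions per queue from the try.txt file dumped by the cat command earlier in this log:

grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' try.txt | sort | uniq -c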
00:25:43.806 [2024-11-26 19:26:50.618527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:26:50.618644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:26:50.618651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:43.806 11415.23 IOPS, 44.59 MiB/s [2024-11-26T18:27:06.920Z] 10599.86 IOPS, 41.41 MiB/s [2024-11-26T18:27:06.920Z] 9893.20 IOPS, 38.65 MiB/s [2024-11-26T18:27:06.920Z] 9463.00 IOPS, 36.96 MiB/s [2024-11-26T18:27:06.920Z] 9592.82 IOPS, 37.47 MiB/s [2024-11-26T18:27:06.920Z] 9694.22 IOPS, 37.87 MiB/s [2024-11-26T18:27:06.920Z] 9876.26 IOPS, 38.58 MiB/s [2024-11-26T18:27:06.920Z] 10053.45 IOPS, 39.27 MiB/s [2024-11-26T18:27:06.920Z] 10210.81 IOPS, 39.89 MiB/s [2024-11-26T18:27:06.920Z] 10261.23 IOPS, 40.08 MiB/s [2024-11-26T18:27:06.920Z] 10312.35 IOPS, 40.28 MiB/s [2024-11-26T18:27:06.920Z] 10385.25 IOPS, 40.57 MiB/s [2024-11-26T18:27:06.920Z] 10532.12 IOPS, 41.14 MiB/s [2024-11-26T18:27:06.920Z] 10643.27 IOPS, 41.58 MiB/s [2024-11-26T18:27:06.920Z] [2024-11-26 19:27:04.144239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:27:04.144282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:27:04.144332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:27:04.144340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:27:04.144353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:27:04.144360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:27:04.144378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:27:04.144386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:27:04.144398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:27:04.144404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:27:04.144417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:27:04.144424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:27:04.144436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:27:04.144442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:27:04.144454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:27:04.144461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:27:04.144473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:27:04.144480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:27:04.144503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.806 [2024-11-26 19:27:04.144510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.806 [2024-11-26 19:27:04.144523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.807 [2024-11-26 19:27:04.144529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.144542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.807 [2024-11-26 19:27:04.144549] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.144561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.807 [2024-11-26 19:27:04.144568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.144580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.807 [2024-11-26 19:27:04.144586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.144599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.807 [2024-11-26 19:27:04.144605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.144619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.807 [2024-11-26 19:27:04.144626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.144638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.807 [2024-11-26 19:27:04.144645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.144811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.807 [2024-11-26 19:27:04.144822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.144837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.807 [2024-11-26 19:27:04.144844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.144856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.807 [2024-11-26 19:27:04.144863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.144875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.807 [2024-11-26 19:27:04.144882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.144894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10944 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:43.807 [2024-11-26 19:27:04.144901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.144913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.807 [2024-11-26 19:27:04.144920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.145319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.807 [2024-11-26 19:27:04.145333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.145348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.807 [2024-11-26 19:27:04.145355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.145368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.807 [2024-11-26 19:27:04.145375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.145387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.807 [2024-11-26 19:27:04.145394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.145406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.807 [2024-11-26 19:27:04.145416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.145428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.807 [2024-11-26 19:27:04.145434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.145447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.807 [2024-11-26 19:27:04.145453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.145465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.807 [2024-11-26 19:27:04.145473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.145485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:104 nsid:1 lba:10096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.807 [2024-11-26 19:27:04.145492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.145505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.807 [2024-11-26 19:27:04.145512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.145524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.807 [2024-11-26 19:27:04.145531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.145543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.807 [2024-11-26 19:27:04.145550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.145562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.807 [2024-11-26 19:27:04.145568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.145580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.807 [2024-11-26 19:27:04.145587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.145599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.807 [2024-11-26 19:27:04.145605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:43.807 [2024-11-26 19:27:04.145617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.145624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.145636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.145644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.145656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.145663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.145680] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.145688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.145700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.145706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.145719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.145725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.145738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.145744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.145756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.145763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.145775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.145781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.145793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.145800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.145812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.145818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.145830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.145837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.145849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.145856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a 
p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.146075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.808 [2024-11-26 19:27:04.146085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.146101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.146108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.146120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.146126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.146139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.146145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.146158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.146164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.146176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.146183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.146194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.146202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.146214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.146220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.146233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.146239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.146251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.146258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.146270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.146276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.146288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.146295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:43.808 [2024-11-26 19:27:04.146307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.808 [2024-11-26 19:27:04.146314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:43.808 10698.44 IOPS, 41.79 MiB/s [2024-11-26T18:27:06.922Z] 10726.68 IOPS, 41.90 MiB/s [2024-11-26T18:27:06.922Z] Received shutdown signal, test time was about 28.588640 seconds 00:25:43.808 00:25:43.808 Latency(us) 00:25:43.808 [2024-11-26T18:27:06.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.808 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:43.808 Verification LBA range: start 0x0 length 0x4000 00:25:43.808 Nvme0n1 : 28.59 10739.85 41.95 0.00 0.00 11898.60 854.31 3019898.88 00:25:43.808 [2024-11-26T18:27:06.922Z] =================================================================================================================== 00:25:43.808 [2024-11-26T18:27:06.922Z] Total : 10739.85 41.95 0.00 0.00 11898.60 854.31 3019898.88 00:25:43.808 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:43.808 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:43.808 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:43.808 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:43.808 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:43.808 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:43.808 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:43.808 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:43.808 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:43.808 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:43.808 rmmod nvme_tcp 00:25:43.808 rmmod nvme_fabrics 00:25:43.808 rmmod nvme_keyring 00:25:44.067 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:44.067 19:27:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:44.067 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:44.067 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3855645 ']' 00:25:44.067 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3855645 00:25:44.067 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3855645 ']' 00:25:44.067 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3855645 00:25:44.067 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:44.067 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.067 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3855645 00:25:44.067 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:44.067 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:44.067 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3855645' 00:25:44.067 killing process with pid 3855645 00:25:44.067 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3855645 00:25:44.067 19:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3855645 00:25:44.067 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:44.067 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:44.067 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:44.067 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:44.067 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:25:44.067 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:44.067 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:25:44.326 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:44.326 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:44.326 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.326 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:44.326 19:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.238 19:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:46.238 00:25:46.238 real 0m40.450s 00:25:46.238 user 1m49.490s 00:25:46.238 sys 0m11.475s 00:25:46.238 19:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:46.238 19:27:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:46.238 ************************************ 00:25:46.238 END TEST nvmf_host_multipath_status 00:25:46.238 ************************************ 00:25:46.238 19:27:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:46.238 19:27:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:46.238 19:27:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:46.238 19:27:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.238 ************************************ 00:25:46.238 START TEST nvmf_discovery_remove_ifc 00:25:46.238 ************************************ 00:25:46.239 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:46.500 * Looking for test storage... 00:25:46.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:46.500 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:46.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.501 --rc genhtml_branch_coverage=1 00:25:46.501 --rc genhtml_function_coverage=1 00:25:46.501 --rc genhtml_legend=1 00:25:46.501 --rc geninfo_all_blocks=1 00:25:46.501 --rc geninfo_unexecuted_blocks=1 00:25:46.501 00:25:46.501 ' 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:46.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.501 --rc genhtml_branch_coverage=1 00:25:46.501 --rc genhtml_function_coverage=1 00:25:46.501 --rc genhtml_legend=1 00:25:46.501 --rc geninfo_all_blocks=1 00:25:46.501 --rc geninfo_unexecuted_blocks=1 00:25:46.501 00:25:46.501 ' 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:46.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.501 --rc genhtml_branch_coverage=1 00:25:46.501 --rc genhtml_function_coverage=1 00:25:46.501 --rc genhtml_legend=1 00:25:46.501 --rc geninfo_all_blocks=1 00:25:46.501 --rc geninfo_unexecuted_blocks=1 00:25:46.501 00:25:46.501 ' 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:46.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.501 --rc genhtml_branch_coverage=1 00:25:46.501 --rc genhtml_function_coverage=1 00:25:46.501 --rc genhtml_legend=1 00:25:46.501 --rc geninfo_all_blocks=1 00:25:46.501 --rc geninfo_unexecuted_blocks=1 00:25:46.501 00:25:46.501 ' 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.501 
19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:46.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:46.501 19:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.152 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.152 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:53.152 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:53.152 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:53.152 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:53.153 19:27:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:53.153 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:53.153 19:27:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:53.153 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:53.153 Found net devices under 0000:86:00.0: cvl_0_0 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:53.153 Found net devices under 0000:86:00.1: cvl_0_1 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:53.153 
19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:53.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:25:53.153 00:25:53.153 --- 10.0.0.2 ping statistics --- 00:25:53.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.153 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:53.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:53.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:25:53.153 00:25:53.153 --- 10.0.0.1 ping statistics --- 00:25:53.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.153 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:53.153 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3864950 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3864950 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3864950 ']' 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
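
The nvmf_tcp_init trace above builds the two-port loopback topology the discovery tests depend on: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), the second port (cvl_0_1) stays in the root namespace as 10.0.0.1 (initiator side), the NVMe/TCP port 4420 is opened with an iptables rule tagged with an SPDK_NVMF comment so the iptr cleanup seen earlier can strip it via iptables-save | grep -v SPDK_NVMF | iptables-restore, and both directions are then verified with ping. A minimal stand-alone sketch of the same bring-up, using the interface names from this run and assuming root privileges (illustrative only, not part of the recorded test):

# Sketch of the nvmf_tcp_init steps recorded above (interface names taken from this run)
ip netns add cvl_0_0_ns_spdk                                        # target-side network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port; the comment tag lets cleanup remove the rule later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# connectivity checks, as in the log
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
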
00:25:53.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.154 [2024-11-26 19:27:15.474664] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:25:53.154 [2024-11-26 19:27:15.474726] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.154 [2024-11-26 19:27:15.551382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.154 [2024-11-26 19:27:15.591772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.154 [2024-11-26 19:27:15.591809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:53.154 [2024-11-26 19:27:15.591816] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:53.154 [2024-11-26 19:27:15.591822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:53.154 [2024-11-26 19:27:15.591827] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:53.154 [2024-11-26 19:27:15.592389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.154 [2024-11-26 19:27:15.748038] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.154 [2024-11-26 19:27:15.756224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:53.154 null0 00:25:53.154 [2024-11-26 19:27:15.788197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3865092 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3865092 /tmp/host.sock 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3865092 ']' 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:53.154 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:53.154 19:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.154 [2024-11-26 19:27:15.858366] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:25:53.154 [2024-11-26 19:27:15.858409] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3865092 ] 00:25:53.154 [2024-11-26 19:27:15.931368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.154 [2024-11-26 19:27:15.973722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.154 19:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:53.154 19:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:53.154 19:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:53.154 19:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:53.154 19:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.154 19:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.154 19:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.154 19:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:53.154 19:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.154 19:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.154 19:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.154 19:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:53.154 19:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.154 19:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.129 [2024-11-26 19:27:17.114177] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:54.129 [2024-11-26 19:27:17.114196] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:54.129 [2024-11-26 19:27:17.114211] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:54.129 [2024-11-26 19:27:17.240607] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:54.388 [2024-11-26 19:27:17.456716] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:54.388 [2024-11-26 19:27:17.457487] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x847a50:1 started. 00:25:54.388 [2024-11-26 19:27:17.458795] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:54.388 [2024-11-26 19:27:17.458834] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:54.388 [2024-11-26 19:27:17.458853] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:54.388 [2024-11-26 19:27:17.458865] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:54.388 [2024-11-26 19:27:17.458882] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:54.388 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.388 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:54.388 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:54.388 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.388 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:54.388 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:54.388 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.388 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:54.388 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.388 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.646 [2024-11-26 19:27:17.504147] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x847a50 was disconnected and freed. delete nvme_qpair. 
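Condensed from the xtrace above, this is the host-side bring-up that produces bdev nvme0n1. It is a sketch rather than the verbatim test script: scripts/rpc.py is assumed to stand in for the autotest rpc_cmd wrapper, and paths are shortened relative to the workspace; the flags themselves are copied from the trace.

# Start the host-side SPDK app on its own RPC socket, with bdev_nvme debug
# logging, and defer framework init until options are applied (--wait-for-rpc).
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
hostpid=$!

# Apply bdev_nvme options (flags exactly as exercised above), then finish init.
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
./scripts/rpc.py -s /tmp/host.sock framework_start_init

# Attach through the discovery service on 10.0.0.2:8009; --wait-for-attach
# blocks until the discovered subsystem's controller is attached (nvme0n1 here).
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach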
00:25:54.646 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:54.646 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:54.646 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:54.646 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:54.646 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:54.646 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.646 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:54.646 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.646 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:54.646 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.646 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:54.646 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.646 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:54.646 19:27:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:55.582 19:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:55.582 19:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.582 19:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:55.582 19:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:55.582 19:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.582 19:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:55.582 19:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:55.582 19:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.841 19:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:55.841 19:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:56.775 19:27:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:56.775 19:27:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.775 19:27:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:56.775 19:27:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.775 19:27:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:56.775 19:27:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:56.775 19:27:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:56.775 19:27:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.775 19:27:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:56.775 19:27:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:57.712 19:27:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:57.712 19:27:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.712 19:27:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:57.712 19:27:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.712 19:27:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:57.712 19:27:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:57.712 19:27:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:57.712 19:27:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.712 19:27:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:57.712 19:27:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:59.090 19:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:59.090 19:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.090 19:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:59.090 19:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.090 19:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:59.090 19:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.090 19:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:59.090 19:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.090 19:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:59.090 19:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:00.026 19:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:00.026 19:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.026 19:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:00.026 19:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.026 19:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:00.026 19:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.026 19:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:00.026 19:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.026 [2024-11-26 19:27:22.900422] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:00.026 [2024-11-26 19:27:22.900464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.026 [2024-11-26 19:27:22.900475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.026 [2024-11-26 19:27:22.900490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.026 [2024-11-26 19:27:22.900496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.026 [2024-11-26 19:27:22.900503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.026 [2024-11-26 19:27:22.900510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.026 [2024-11-26 19:27:22.900517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.026 [2024-11-26 19:27:22.900524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.026 [2024-11-26 19:27:22.900532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.026 [2024-11-26 19:27:22.900539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.026 [2024-11-26 19:27:22.900545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824240 is same with the state(6) to be set 00:26:00.027 19:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:00.027 19:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:00.027 [2024-11-26 19:27:22.910445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824240 (9): Bad file descriptor 00:26:00.027 [2024-11-26 19:27:22.920478] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:00.027 [2024-11-26 19:27:22.920490] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:00.027 [2024-11-26 19:27:22.920494] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
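The alternating get_bdev_list / sleep 1 entries above are the test polling the host socket until the bdev list reaches the expected value. The pair below is reconstructed roughly from the xtrace, reusing the rpc.py invocation style of the first sketch; the real helpers in discovery_remove_ifc.sh presumably also bound the number of retries, which is omitted here.

get_bdev_list() {
    # Bdev names reported by the host app, flattened to one sorted line.
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once per second until the list matches: "nvme0n1" while attached,
    # "" once the interface removal has torn the controller down.
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}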
00:26:00.027 [2024-11-26 19:27:22.920498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:00.027 [2024-11-26 19:27:22.920518] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:00.964 19:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:00.964 19:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.964 19:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:00.964 19:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.964 19:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:00.964 19:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.964 19:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:00.964 [2024-11-26 19:27:23.924732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:00.964 [2024-11-26 19:27:23.924807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824240 with addr=10.0.0.2, port=4420 00:26:00.964 [2024-11-26 19:27:23.924837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824240 is same with the state(6) to be set 00:26:00.964 [2024-11-26 19:27:23.924889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824240 (9): Bad file descriptor 00:26:00.964 [2024-11-26 19:27:23.925835] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:26:00.964 [2024-11-26 19:27:23.925906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:00.964 [2024-11-26 19:27:23.925931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:00.964 [2024-11-26 19:27:23.925954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:00.964 [2024-11-26 19:27:23.925973] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:00.964 [2024-11-26 19:27:23.925989] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:00.964 [2024-11-26 19:27:23.926002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:00.964 [2024-11-26 19:27:23.926024] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:26:00.964 [2024-11-26 19:27:23.926039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:00.964 19:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.964 19:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:00.964 19:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:01.901 [2024-11-26 19:27:24.928552] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:01.901 [2024-11-26 19:27:24.928570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:01.901 [2024-11-26 19:27:24.928581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:01.901 [2024-11-26 19:27:24.928587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:01.901 [2024-11-26 19:27:24.928594] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:01.901 [2024-11-26 19:27:24.928600] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:01.901 [2024-11-26 19:27:24.928604] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:01.901 [2024-11-26 19:27:24.928608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:01.901 [2024-11-26 19:27:24.928626] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:01.901 [2024-11-26 19:27:24.928646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.901 [2024-11-26 19:27:24.928655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.901 [2024-11-26 19:27:24.928664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.901 [2024-11-26 19:27:24.928675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.901 [2024-11-26 19:27:24.928682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.902 [2024-11-26 19:27:24.928689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.902 [2024-11-26 19:27:24.928695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.902 [2024-11-26 19:27:24.928701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.902 [2024-11-26 19:27:24.928708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.902 [2024-11-26 19:27:24.928718] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.902 [2024-11-26 19:27:24.928724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:01.902 [2024-11-26 19:27:24.929119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x813910 (9): Bad file descriptor 00:26:01.902 [2024-11-26 19:27:24.930130] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:01.902 [2024-11-26 19:27:24.930141] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:01.902 19:27:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:01.902 19:27:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.902 19:27:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:01.902 19:27:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.902 19:27:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:01.902 19:27:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.902 19:27:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:01.902 19:27:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.902 19:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:01.902 19:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:02.161 19:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:02.161 19:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:02.161 19:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:02.161 19:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.161 19:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:02.161 19:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.161 19:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:02.161 19:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:02.161 19:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:02.161 19:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.161 19:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:02.161 19:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:03.097 19:27:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:03.097 19:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.097 19:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:03.097 19:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.097 19:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:03.097 19:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.097 19:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:03.097 19:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.097 19:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:03.097 19:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:04.032 [2024-11-26 19:27:26.979803] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:04.032 [2024-11-26 19:27:26.979819] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:04.032 [2024-11-26 19:27:26.979832] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:04.032 [2024-11-26 19:27:27.066100] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:04.291 19:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:04.291 19:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.291 19:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:04.291 19:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.291 19:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:04.291 19:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:04.291 19:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:04.291 19:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.291 [2024-11-26 19:27:27.240965] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:04.291 [2024-11-26 19:27:27.241546] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x826700:1 started. 
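The ip netns commands traced earlier are the actual fault injection: dropping the target-side address forces the errno-110 / "Bad file descriptor" reconnect failures above, and restoring it lets the still-running discovery service re-attach the subsystem as nvme1. In outline, using the interface and namespace names from this run and the polling helpers sketched above:

# Take away the interface the target listens on; the host hits its
# ctrlr-loss-timeout, the controller is deleted, and the bdev list drains.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
wait_for_bdev ''

# Bring it back; discovery re-attaches and a fresh bdev (nvme1n1) appears.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1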
00:26:04.291 [2024-11-26 19:27:27.242561] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:04.291 [2024-11-26 19:27:27.242595] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:04.291 [2024-11-26 19:27:27.242614] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:04.291 [2024-11-26 19:27:27.242627] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:04.291 [2024-11-26 19:27:27.242635] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:04.291 19:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:04.291 19:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:04.291 [2024-11-26 19:27:27.248738] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x826700 was disconnected and freed. delete nvme_qpair. 00:26:05.226 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:05.226 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:05.226 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:05.226 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.226 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:05.226 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:05.226 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:05.226 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.226 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:05.226 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:05.226 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3865092 00:26:05.226 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3865092 ']' 00:26:05.226 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3865092 00:26:05.226 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:05.226 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:05.226 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3865092 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3865092' 00:26:05.486 killing process with pid 3865092 
00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3865092 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3865092 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:05.486 rmmod nvme_tcp 00:26:05.486 rmmod nvme_fabrics 00:26:05.486 rmmod nvme_keyring 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3864950 ']' 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3864950 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3864950 ']' 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3864950 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:05.486 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3864950 00:26:05.746 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:05.746 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:05.746 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3864950' 00:26:05.746 killing process with pid 3864950 00:26:05.746 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3864950 00:26:05.746 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3864950 00:26:05.746 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:05.746 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:05.746 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:05.746 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:05.746 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:05.746 19:27:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:05.746 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:05.746 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:05.746 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:05.746 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.746 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:05.746 19:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.284 19:27:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:08.284 00:26:08.284 real 0m21.542s 00:26:08.284 user 0m26.880s 00:26:08.284 sys 0m5.887s 00:26:08.284 19:27:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:08.284 19:27:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:08.284 ************************************ 00:26:08.284 END TEST nvmf_discovery_remove_ifc 00:26:08.284 ************************************ 00:26:08.284 19:27:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:08.284 19:27:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:08.284 19:27:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:08.284 19:27:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.284 ************************************ 00:26:08.284 START TEST nvmf_identify_kernel_target 00:26:08.284 ************************************ 00:26:08.284 19:27:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:08.284 * Looking for test storage... 
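Teardown for the discovery_remove_ifc test, condensed from the trace just above: the extra host app is killed (killprocess checks the pid is alive and not a sudo wrapper, then kill + wait), nvmftestfini kills the nvmf target pid itself, the kernel NVMe-oF initiator modules are unloaded, and the SPDK firewall rules and namespace plumbing are undone. A sketch of what that amounts to in this run; the namespace removal step (_remove_spdk_ns) is summarized by the final address flush rather than reproduced.

# Stop the extra host app started earlier (pid 3865092 above); nvmftestfini
# does the same for the nvmf target pid (3864950) internally.
killprocess "$hostpid"

# nvmfcleanup: flush, then unload the kernel initiator modules
# (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above).
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Restore iptables minus the SPDK-added rules, then clear the test interface
# addressing left behind by the namespace teardown.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush cvl_0_1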
00:26:08.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:08.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.284 --rc genhtml_branch_coverage=1 00:26:08.284 --rc genhtml_function_coverage=1 00:26:08.284 --rc genhtml_legend=1 00:26:08.284 --rc geninfo_all_blocks=1 00:26:08.284 --rc geninfo_unexecuted_blocks=1 00:26:08.284 00:26:08.284 ' 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:08.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.284 --rc genhtml_branch_coverage=1 00:26:08.284 --rc genhtml_function_coverage=1 00:26:08.284 --rc genhtml_legend=1 00:26:08.284 --rc geninfo_all_blocks=1 00:26:08.284 --rc geninfo_unexecuted_blocks=1 00:26:08.284 00:26:08.284 ' 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:08.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.284 --rc genhtml_branch_coverage=1 00:26:08.284 --rc genhtml_function_coverage=1 00:26:08.284 --rc genhtml_legend=1 00:26:08.284 --rc geninfo_all_blocks=1 00:26:08.284 --rc geninfo_unexecuted_blocks=1 00:26:08.284 00:26:08.284 ' 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:08.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.284 --rc genhtml_branch_coverage=1 00:26:08.284 --rc genhtml_function_coverage=1 00:26:08.284 --rc genhtml_legend=1 00:26:08.284 --rc geninfo_all_blocks=1 00:26:08.284 --rc geninfo_unexecuted_blocks=1 00:26:08.284 00:26:08.284 ' 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:08.284 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:08.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:08.285 19:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:14.853 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:14.853 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:14.853 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:14.853 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:14.853 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:14.853 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:14.853 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:14.853 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:14.853 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:14.853 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:14.853 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:14.853 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:14.853 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:14.853 19:27:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:14.853 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:14.853 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:14.853 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:14.853 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:14.854 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:14.854 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:14.854 Found net devices under 0000:86:00.0: cvl_0_0 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:14.854 Found net devices under 0000:86:00.1: cvl_0_1 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:14.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:14.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:26:14.854 00:26:14.854 --- 10.0.0.2 ping statistics --- 00:26:14.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.854 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:14.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:14.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:26:14.854 00:26:14.854 --- 10.0.0.1 ping statistics --- 00:26:14.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.854 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:14.854 19:27:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:14.854 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:14.854 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:14.854 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:14.854 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.854 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.854 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.854 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.855 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.855 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.855 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.855 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.855 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.855 19:27:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:14.855 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:14.855 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:14.855 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:14.855 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:14.855 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:14.855 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:14.855 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:14.855 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:14.855 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:14.855 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:14.855 19:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:16.763 Waiting for block devices as requested 00:26:16.763 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:17.022 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:17.022 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:17.022 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:17.282 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:17.282 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:17.282 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:17.282 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:17.541 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:17.541 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:17.541 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:17.831 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:17.831 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:17.831 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:17.831 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:18.090 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:18.090 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:18.090 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:18.090 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:18.090 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:18.090 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:18.090 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:18.090 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
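The stretch of trace that follows picks a free local NVMe block device and exports it through the kernel nvmet configfs tree, then points nvme discover at the resulting TCP listener. A condensed sketch of the equivalent shell steps is given here for orientation; the paths and echoed values come from the trace, while the configfs attribute file names are assumed to be the standard kernel nvmet entries, since the trace records only the values being echoed and not the files they are redirected into.

  # Sketch of the kernel NVMe-oF/TCP target setup mirrored by the trace below.
  # Attribute file names are assumed (standard nvmet configfs); run as root.
  modprobe nvmet
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  ns=$subsys/namespaces/1
  port=$nvmet/ports/1

  mkdir "$subsys" "$ns" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string (attribute name assumed)
  echo 1 > "$subsys/attr_allow_any_host"                         # accept any host NQN
  echo /dev/nvme0n1 > "$ns/device_path"                          # back namespace 1 with the local NVMe disk
  echo 1 > "$ns/enable"
  echo 10.0.0.1 > "$port/addr_traddr"                            # address the host side will dial
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                            # linking subsystem to port opens the listener

  # Host-side discovery against the kernel target, as in the trace
  # (the trace additionally passes --hostnqn/--hostid):
  nvme discover -t tcp -a 10.0.0.1 -s 4420

Linking the subsystem under ports/1/subsystems/ is what actually activates the listener on 10.0.0.1:4420, which is why the discovery log printed right after reports two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.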
00:26:18.090 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:18.090 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:18.090 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:18.349 No valid GPT data, bailing 00:26:18.349 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:18.349 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:18.349 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:18.349 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:18.349 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:18.349 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:18.349 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:18.349 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:18.349 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:18.349 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:18.349 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:18.349 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:18.349 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:18.349 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:18.349 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:18.349 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:18.349 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:18.349 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:18.349 00:26:18.349 Discovery Log Number of Records 2, Generation counter 2 00:26:18.349 =====Discovery Log Entry 0====== 00:26:18.349 trtype: tcp 00:26:18.349 adrfam: ipv4 00:26:18.349 subtype: current discovery subsystem 00:26:18.349 treq: not specified, sq flow control disable supported 00:26:18.349 portid: 1 00:26:18.349 trsvcid: 4420 00:26:18.349 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:18.349 traddr: 10.0.0.1 00:26:18.349 eflags: none 00:26:18.349 sectype: none 00:26:18.349 =====Discovery Log Entry 1====== 00:26:18.349 trtype: tcp 00:26:18.349 adrfam: ipv4 00:26:18.349 subtype: nvme subsystem 00:26:18.349 treq: not specified, sq flow control disable 
supported 00:26:18.349 portid: 1 00:26:18.349 trsvcid: 4420 00:26:18.349 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:18.349 traddr: 10.0.0.1 00:26:18.349 eflags: none 00:26:18.349 sectype: none 00:26:18.349 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:18.349 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:18.349 ===================================================== 00:26:18.349 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:18.349 ===================================================== 00:26:18.349 Controller Capabilities/Features 00:26:18.349 ================================ 00:26:18.349 Vendor ID: 0000 00:26:18.349 Subsystem Vendor ID: 0000 00:26:18.349 Serial Number: b776b70b3247ca3221d8 00:26:18.349 Model Number: Linux 00:26:18.349 Firmware Version: 6.8.9-20 00:26:18.349 Recommended Arb Burst: 0 00:26:18.349 IEEE OUI Identifier: 00 00 00 00:26:18.349 Multi-path I/O 00:26:18.349 May have multiple subsystem ports: No 00:26:18.349 May have multiple controllers: No 00:26:18.349 Associated with SR-IOV VF: No 00:26:18.349 Max Data Transfer Size: Unlimited 00:26:18.349 Max Number of Namespaces: 0 00:26:18.349 Max Number of I/O Queues: 1024 00:26:18.349 NVMe Specification Version (VS): 1.3 00:26:18.349 NVMe Specification Version (Identify): 1.3 00:26:18.349 Maximum Queue Entries: 1024 00:26:18.349 Contiguous Queues Required: No 00:26:18.349 Arbitration Mechanisms Supported 00:26:18.349 Weighted Round Robin: Not Supported 00:26:18.349 Vendor Specific: Not Supported 00:26:18.349 Reset Timeout: 7500 ms 00:26:18.349 Doorbell Stride: 4 bytes 00:26:18.349 NVM Subsystem Reset: Not Supported 00:26:18.349 Command Sets Supported 00:26:18.349 NVM Command Set: Supported 00:26:18.349 Boot Partition: Not Supported 00:26:18.349 Memory Page Size Minimum: 4096 bytes 00:26:18.349 Memory Page Size Maximum: 4096 bytes 00:26:18.349 Persistent Memory Region: Not Supported 00:26:18.349 Optional Asynchronous Events Supported 00:26:18.349 Namespace Attribute Notices: Not Supported 00:26:18.349 Firmware Activation Notices: Not Supported 00:26:18.349 ANA Change Notices: Not Supported 00:26:18.349 PLE Aggregate Log Change Notices: Not Supported 00:26:18.349 LBA Status Info Alert Notices: Not Supported 00:26:18.349 EGE Aggregate Log Change Notices: Not Supported 00:26:18.349 Normal NVM Subsystem Shutdown event: Not Supported 00:26:18.349 Zone Descriptor Change Notices: Not Supported 00:26:18.349 Discovery Log Change Notices: Supported 00:26:18.349 Controller Attributes 00:26:18.349 128-bit Host Identifier: Not Supported 00:26:18.349 Non-Operational Permissive Mode: Not Supported 00:26:18.349 NVM Sets: Not Supported 00:26:18.349 Read Recovery Levels: Not Supported 00:26:18.349 Endurance Groups: Not Supported 00:26:18.349 Predictable Latency Mode: Not Supported 00:26:18.349 Traffic Based Keep ALive: Not Supported 00:26:18.349 Namespace Granularity: Not Supported 00:26:18.349 SQ Associations: Not Supported 00:26:18.349 UUID List: Not Supported 00:26:18.349 Multi-Domain Subsystem: Not Supported 00:26:18.349 Fixed Capacity Management: Not Supported 00:26:18.349 Variable Capacity Management: Not Supported 00:26:18.349 Delete Endurance Group: Not Supported 00:26:18.349 Delete NVM Set: Not Supported 00:26:18.349 Extended LBA Formats Supported: Not Supported 00:26:18.349 Flexible Data Placement 
Supported: Not Supported 00:26:18.349 00:26:18.349 Controller Memory Buffer Support 00:26:18.349 ================================ 00:26:18.349 Supported: No 00:26:18.349 00:26:18.349 Persistent Memory Region Support 00:26:18.349 ================================ 00:26:18.350 Supported: No 00:26:18.350 00:26:18.350 Admin Command Set Attributes 00:26:18.350 ============================ 00:26:18.350 Security Send/Receive: Not Supported 00:26:18.350 Format NVM: Not Supported 00:26:18.350 Firmware Activate/Download: Not Supported 00:26:18.350 Namespace Management: Not Supported 00:26:18.350 Device Self-Test: Not Supported 00:26:18.350 Directives: Not Supported 00:26:18.350 NVMe-MI: Not Supported 00:26:18.350 Virtualization Management: Not Supported 00:26:18.350 Doorbell Buffer Config: Not Supported 00:26:18.350 Get LBA Status Capability: Not Supported 00:26:18.350 Command & Feature Lockdown Capability: Not Supported 00:26:18.350 Abort Command Limit: 1 00:26:18.350 Async Event Request Limit: 1 00:26:18.350 Number of Firmware Slots: N/A 00:26:18.350 Firmware Slot 1 Read-Only: N/A 00:26:18.350 Firmware Activation Without Reset: N/A 00:26:18.350 Multiple Update Detection Support: N/A 00:26:18.350 Firmware Update Granularity: No Information Provided 00:26:18.350 Per-Namespace SMART Log: No 00:26:18.350 Asymmetric Namespace Access Log Page: Not Supported 00:26:18.350 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:18.350 Command Effects Log Page: Not Supported 00:26:18.350 Get Log Page Extended Data: Supported 00:26:18.350 Telemetry Log Pages: Not Supported 00:26:18.350 Persistent Event Log Pages: Not Supported 00:26:18.350 Supported Log Pages Log Page: May Support 00:26:18.350 Commands Supported & Effects Log Page: Not Supported 00:26:18.350 Feature Identifiers & Effects Log Page:May Support 00:26:18.350 NVMe-MI Commands & Effects Log Page: May Support 00:26:18.350 Data Area 4 for Telemetry Log: Not Supported 00:26:18.350 Error Log Page Entries Supported: 1 00:26:18.350 Keep Alive: Not Supported 00:26:18.350 00:26:18.350 NVM Command Set Attributes 00:26:18.350 ========================== 00:26:18.350 Submission Queue Entry Size 00:26:18.350 Max: 1 00:26:18.350 Min: 1 00:26:18.350 Completion Queue Entry Size 00:26:18.350 Max: 1 00:26:18.350 Min: 1 00:26:18.350 Number of Namespaces: 0 00:26:18.350 Compare Command: Not Supported 00:26:18.350 Write Uncorrectable Command: Not Supported 00:26:18.350 Dataset Management Command: Not Supported 00:26:18.350 Write Zeroes Command: Not Supported 00:26:18.350 Set Features Save Field: Not Supported 00:26:18.350 Reservations: Not Supported 00:26:18.350 Timestamp: Not Supported 00:26:18.350 Copy: Not Supported 00:26:18.350 Volatile Write Cache: Not Present 00:26:18.350 Atomic Write Unit (Normal): 1 00:26:18.350 Atomic Write Unit (PFail): 1 00:26:18.350 Atomic Compare & Write Unit: 1 00:26:18.350 Fused Compare & Write: Not Supported 00:26:18.350 Scatter-Gather List 00:26:18.350 SGL Command Set: Supported 00:26:18.350 SGL Keyed: Not Supported 00:26:18.350 SGL Bit Bucket Descriptor: Not Supported 00:26:18.350 SGL Metadata Pointer: Not Supported 00:26:18.350 Oversized SGL: Not Supported 00:26:18.350 SGL Metadata Address: Not Supported 00:26:18.350 SGL Offset: Supported 00:26:18.350 Transport SGL Data Block: Not Supported 00:26:18.350 Replay Protected Memory Block: Not Supported 00:26:18.350 00:26:18.350 Firmware Slot Information 00:26:18.350 ========================= 00:26:18.350 Active slot: 0 00:26:18.350 00:26:18.350 00:26:18.350 Error Log 00:26:18.350 
========= 00:26:18.350 00:26:18.350 Active Namespaces 00:26:18.350 ================= 00:26:18.350 Discovery Log Page 00:26:18.350 ================== 00:26:18.350 Generation Counter: 2 00:26:18.350 Number of Records: 2 00:26:18.350 Record Format: 0 00:26:18.350 00:26:18.350 Discovery Log Entry 0 00:26:18.350 ---------------------- 00:26:18.350 Transport Type: 3 (TCP) 00:26:18.350 Address Family: 1 (IPv4) 00:26:18.350 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:18.350 Entry Flags: 00:26:18.350 Duplicate Returned Information: 0 00:26:18.350 Explicit Persistent Connection Support for Discovery: 0 00:26:18.350 Transport Requirements: 00:26:18.350 Secure Channel: Not Specified 00:26:18.350 Port ID: 1 (0x0001) 00:26:18.350 Controller ID: 65535 (0xffff) 00:26:18.350 Admin Max SQ Size: 32 00:26:18.350 Transport Service Identifier: 4420 00:26:18.350 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:18.350 Transport Address: 10.0.0.1 00:26:18.350 Discovery Log Entry 1 00:26:18.350 ---------------------- 00:26:18.350 Transport Type: 3 (TCP) 00:26:18.350 Address Family: 1 (IPv4) 00:26:18.350 Subsystem Type: 2 (NVM Subsystem) 00:26:18.350 Entry Flags: 00:26:18.350 Duplicate Returned Information: 0 00:26:18.350 Explicit Persistent Connection Support for Discovery: 0 00:26:18.350 Transport Requirements: 00:26:18.350 Secure Channel: Not Specified 00:26:18.350 Port ID: 1 (0x0001) 00:26:18.350 Controller ID: 65535 (0xffff) 00:26:18.350 Admin Max SQ Size: 32 00:26:18.350 Transport Service Identifier: 4420 00:26:18.350 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:18.350 Transport Address: 10.0.0.1 00:26:18.350 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:18.611 get_feature(0x01) failed 00:26:18.611 get_feature(0x02) failed 00:26:18.611 get_feature(0x04) failed 00:26:18.611 ===================================================== 00:26:18.611 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:18.611 ===================================================== 00:26:18.611 Controller Capabilities/Features 00:26:18.611 ================================ 00:26:18.611 Vendor ID: 0000 00:26:18.611 Subsystem Vendor ID: 0000 00:26:18.611 Serial Number: 0f35c050631503a2d1e7 00:26:18.611 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:18.611 Firmware Version: 6.8.9-20 00:26:18.611 Recommended Arb Burst: 6 00:26:18.611 IEEE OUI Identifier: 00 00 00 00:26:18.611 Multi-path I/O 00:26:18.611 May have multiple subsystem ports: Yes 00:26:18.611 May have multiple controllers: Yes 00:26:18.611 Associated with SR-IOV VF: No 00:26:18.611 Max Data Transfer Size: Unlimited 00:26:18.611 Max Number of Namespaces: 1024 00:26:18.611 Max Number of I/O Queues: 128 00:26:18.611 NVMe Specification Version (VS): 1.3 00:26:18.611 NVMe Specification Version (Identify): 1.3 00:26:18.611 Maximum Queue Entries: 1024 00:26:18.611 Contiguous Queues Required: No 00:26:18.611 Arbitration Mechanisms Supported 00:26:18.611 Weighted Round Robin: Not Supported 00:26:18.611 Vendor Specific: Not Supported 00:26:18.611 Reset Timeout: 7500 ms 00:26:18.611 Doorbell Stride: 4 bytes 00:26:18.611 NVM Subsystem Reset: Not Supported 00:26:18.611 Command Sets Supported 00:26:18.611 NVM Command Set: Supported 00:26:18.611 Boot Partition: Not Supported 00:26:18.611 
Memory Page Size Minimum: 4096 bytes 00:26:18.611 Memory Page Size Maximum: 4096 bytes 00:26:18.611 Persistent Memory Region: Not Supported 00:26:18.611 Optional Asynchronous Events Supported 00:26:18.611 Namespace Attribute Notices: Supported 00:26:18.611 Firmware Activation Notices: Not Supported 00:26:18.611 ANA Change Notices: Supported 00:26:18.611 PLE Aggregate Log Change Notices: Not Supported 00:26:18.611 LBA Status Info Alert Notices: Not Supported 00:26:18.611 EGE Aggregate Log Change Notices: Not Supported 00:26:18.611 Normal NVM Subsystem Shutdown event: Not Supported 00:26:18.611 Zone Descriptor Change Notices: Not Supported 00:26:18.611 Discovery Log Change Notices: Not Supported 00:26:18.611 Controller Attributes 00:26:18.611 128-bit Host Identifier: Supported 00:26:18.611 Non-Operational Permissive Mode: Not Supported 00:26:18.611 NVM Sets: Not Supported 00:26:18.611 Read Recovery Levels: Not Supported 00:26:18.611 Endurance Groups: Not Supported 00:26:18.611 Predictable Latency Mode: Not Supported 00:26:18.611 Traffic Based Keep ALive: Supported 00:26:18.611 Namespace Granularity: Not Supported 00:26:18.611 SQ Associations: Not Supported 00:26:18.611 UUID List: Not Supported 00:26:18.611 Multi-Domain Subsystem: Not Supported 00:26:18.611 Fixed Capacity Management: Not Supported 00:26:18.611 Variable Capacity Management: Not Supported 00:26:18.611 Delete Endurance Group: Not Supported 00:26:18.611 Delete NVM Set: Not Supported 00:26:18.611 Extended LBA Formats Supported: Not Supported 00:26:18.611 Flexible Data Placement Supported: Not Supported 00:26:18.611 00:26:18.611 Controller Memory Buffer Support 00:26:18.611 ================================ 00:26:18.611 Supported: No 00:26:18.611 00:26:18.611 Persistent Memory Region Support 00:26:18.611 ================================ 00:26:18.611 Supported: No 00:26:18.611 00:26:18.611 Admin Command Set Attributes 00:26:18.611 ============================ 00:26:18.611 Security Send/Receive: Not Supported 00:26:18.611 Format NVM: Not Supported 00:26:18.611 Firmware Activate/Download: Not Supported 00:26:18.611 Namespace Management: Not Supported 00:26:18.611 Device Self-Test: Not Supported 00:26:18.611 Directives: Not Supported 00:26:18.611 NVMe-MI: Not Supported 00:26:18.611 Virtualization Management: Not Supported 00:26:18.611 Doorbell Buffer Config: Not Supported 00:26:18.611 Get LBA Status Capability: Not Supported 00:26:18.611 Command & Feature Lockdown Capability: Not Supported 00:26:18.611 Abort Command Limit: 4 00:26:18.611 Async Event Request Limit: 4 00:26:18.611 Number of Firmware Slots: N/A 00:26:18.611 Firmware Slot 1 Read-Only: N/A 00:26:18.611 Firmware Activation Without Reset: N/A 00:26:18.611 Multiple Update Detection Support: N/A 00:26:18.611 Firmware Update Granularity: No Information Provided 00:26:18.611 Per-Namespace SMART Log: Yes 00:26:18.611 Asymmetric Namespace Access Log Page: Supported 00:26:18.611 ANA Transition Time : 10 sec 00:26:18.611 00:26:18.611 Asymmetric Namespace Access Capabilities 00:26:18.611 ANA Optimized State : Supported 00:26:18.611 ANA Non-Optimized State : Supported 00:26:18.611 ANA Inaccessible State : Supported 00:26:18.611 ANA Persistent Loss State : Supported 00:26:18.611 ANA Change State : Supported 00:26:18.611 ANAGRPID is not changed : No 00:26:18.611 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:18.611 00:26:18.611 ANA Group Identifier Maximum : 128 00:26:18.611 Number of ANA Group Identifiers : 128 00:26:18.611 Max Number of Allowed Namespaces : 1024 00:26:18.611 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:18.611 Command Effects Log Page: Supported 00:26:18.611 Get Log Page Extended Data: Supported 00:26:18.611 Telemetry Log Pages: Not Supported 00:26:18.611 Persistent Event Log Pages: Not Supported 00:26:18.611 Supported Log Pages Log Page: May Support 00:26:18.611 Commands Supported & Effects Log Page: Not Supported 00:26:18.611 Feature Identifiers & Effects Log Page:May Support 00:26:18.611 NVMe-MI Commands & Effects Log Page: May Support 00:26:18.611 Data Area 4 for Telemetry Log: Not Supported 00:26:18.611 Error Log Page Entries Supported: 128 00:26:18.611 Keep Alive: Supported 00:26:18.611 Keep Alive Granularity: 1000 ms 00:26:18.611 00:26:18.611 NVM Command Set Attributes 00:26:18.611 ========================== 00:26:18.611 Submission Queue Entry Size 00:26:18.611 Max: 64 00:26:18.611 Min: 64 00:26:18.611 Completion Queue Entry Size 00:26:18.611 Max: 16 00:26:18.611 Min: 16 00:26:18.611 Number of Namespaces: 1024 00:26:18.611 Compare Command: Not Supported 00:26:18.611 Write Uncorrectable Command: Not Supported 00:26:18.611 Dataset Management Command: Supported 00:26:18.611 Write Zeroes Command: Supported 00:26:18.611 Set Features Save Field: Not Supported 00:26:18.611 Reservations: Not Supported 00:26:18.611 Timestamp: Not Supported 00:26:18.611 Copy: Not Supported 00:26:18.611 Volatile Write Cache: Present 00:26:18.611 Atomic Write Unit (Normal): 1 00:26:18.611 Atomic Write Unit (PFail): 1 00:26:18.611 Atomic Compare & Write Unit: 1 00:26:18.611 Fused Compare & Write: Not Supported 00:26:18.612 Scatter-Gather List 00:26:18.612 SGL Command Set: Supported 00:26:18.612 SGL Keyed: Not Supported 00:26:18.612 SGL Bit Bucket Descriptor: Not Supported 00:26:18.612 SGL Metadata Pointer: Not Supported 00:26:18.612 Oversized SGL: Not Supported 00:26:18.612 SGL Metadata Address: Not Supported 00:26:18.612 SGL Offset: Supported 00:26:18.612 Transport SGL Data Block: Not Supported 00:26:18.612 Replay Protected Memory Block: Not Supported 00:26:18.612 00:26:18.612 Firmware Slot Information 00:26:18.612 ========================= 00:26:18.612 Active slot: 0 00:26:18.612 00:26:18.612 Asymmetric Namespace Access 00:26:18.612 =========================== 00:26:18.612 Change Count : 0 00:26:18.612 Number of ANA Group Descriptors : 1 00:26:18.612 ANA Group Descriptor : 0 00:26:18.612 ANA Group ID : 1 00:26:18.612 Number of NSID Values : 1 00:26:18.612 Change Count : 0 00:26:18.612 ANA State : 1 00:26:18.612 Namespace Identifier : 1 00:26:18.612 00:26:18.612 Commands Supported and Effects 00:26:18.612 ============================== 00:26:18.612 Admin Commands 00:26:18.612 -------------- 00:26:18.612 Get Log Page (02h): Supported 00:26:18.612 Identify (06h): Supported 00:26:18.612 Abort (08h): Supported 00:26:18.612 Set Features (09h): Supported 00:26:18.612 Get Features (0Ah): Supported 00:26:18.612 Asynchronous Event Request (0Ch): Supported 00:26:18.612 Keep Alive (18h): Supported 00:26:18.612 I/O Commands 00:26:18.612 ------------ 00:26:18.612 Flush (00h): Supported 00:26:18.612 Write (01h): Supported LBA-Change 00:26:18.612 Read (02h): Supported 00:26:18.612 Write Zeroes (08h): Supported LBA-Change 00:26:18.612 Dataset Management (09h): Supported 00:26:18.612 00:26:18.612 Error Log 00:26:18.612 ========= 00:26:18.612 Entry: 0 00:26:18.612 Error Count: 0x3 00:26:18.612 Submission Queue Id: 0x0 00:26:18.612 Command Id: 0x5 00:26:18.612 Phase Bit: 0 00:26:18.612 Status Code: 0x2 00:26:18.612 Status Code Type: 0x0 00:26:18.612 Do Not Retry: 1 00:26:18.612 
Error Location: 0x28 00:26:18.612 LBA: 0x0 00:26:18.612 Namespace: 0x0 00:26:18.612 Vendor Log Page: 0x0 00:26:18.612 ----------- 00:26:18.612 Entry: 1 00:26:18.612 Error Count: 0x2 00:26:18.612 Submission Queue Id: 0x0 00:26:18.612 Command Id: 0x5 00:26:18.612 Phase Bit: 0 00:26:18.612 Status Code: 0x2 00:26:18.612 Status Code Type: 0x0 00:26:18.612 Do Not Retry: 1 00:26:18.612 Error Location: 0x28 00:26:18.612 LBA: 0x0 00:26:18.612 Namespace: 0x0 00:26:18.612 Vendor Log Page: 0x0 00:26:18.612 ----------- 00:26:18.612 Entry: 2 00:26:18.612 Error Count: 0x1 00:26:18.612 Submission Queue Id: 0x0 00:26:18.612 Command Id: 0x4 00:26:18.612 Phase Bit: 0 00:26:18.612 Status Code: 0x2 00:26:18.612 Status Code Type: 0x0 00:26:18.612 Do Not Retry: 1 00:26:18.612 Error Location: 0x28 00:26:18.612 LBA: 0x0 00:26:18.612 Namespace: 0x0 00:26:18.612 Vendor Log Page: 0x0 00:26:18.612 00:26:18.612 Number of Queues 00:26:18.612 ================ 00:26:18.612 Number of I/O Submission Queues: 128 00:26:18.612 Number of I/O Completion Queues: 128 00:26:18.612 00:26:18.612 ZNS Specific Controller Data 00:26:18.612 ============================ 00:26:18.612 Zone Append Size Limit: 0 00:26:18.612 00:26:18.612 00:26:18.612 Active Namespaces 00:26:18.612 ================= 00:26:18.612 get_feature(0x05) failed 00:26:18.612 Namespace ID:1 00:26:18.612 Command Set Identifier: NVM (00h) 00:26:18.612 Deallocate: Supported 00:26:18.612 Deallocated/Unwritten Error: Not Supported 00:26:18.612 Deallocated Read Value: Unknown 00:26:18.612 Deallocate in Write Zeroes: Not Supported 00:26:18.612 Deallocated Guard Field: 0xFFFF 00:26:18.612 Flush: Supported 00:26:18.612 Reservation: Not Supported 00:26:18.612 Namespace Sharing Capabilities: Multiple Controllers 00:26:18.612 Size (in LBAs): 3125627568 (1490GiB) 00:26:18.612 Capacity (in LBAs): 3125627568 (1490GiB) 00:26:18.612 Utilization (in LBAs): 3125627568 (1490GiB) 00:26:18.612 UUID: 6829e497-71b9-4e4e-b0bf-234da5d8e60d 00:26:18.612 Thin Provisioning: Not Supported 00:26:18.612 Per-NS Atomic Units: Yes 00:26:18.612 Atomic Boundary Size (Normal): 0 00:26:18.612 Atomic Boundary Size (PFail): 0 00:26:18.612 Atomic Boundary Offset: 0 00:26:18.612 NGUID/EUI64 Never Reused: No 00:26:18.612 ANA group ID: 1 00:26:18.612 Namespace Write Protected: No 00:26:18.612 Number of LBA Formats: 1 00:26:18.612 Current LBA Format: LBA Format #00 00:26:18.612 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:18.612 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:18.612 rmmod nvme_tcp 00:26:18.612 rmmod nvme_fabrics 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:18.612 19:27:41 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:18.612 19:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.151 19:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:21.151 19:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:21.151 19:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:21.151 19:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:21.151 19:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:21.151 19:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:21.151 19:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:21.151 19:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:21.151 19:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:21.151 19:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:21.151 19:27:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:23.687 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:23.687 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:23.687 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:23.687 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:23.687 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:23.687 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:26:23.687 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:23.687 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:23.687 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:23.687 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:23.687 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:23.687 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:23.687 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:23.687 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:23.687 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:23.687 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:25.066 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:25.326 00:26:25.326 real 0m17.278s 00:26:25.326 user 0m4.374s 00:26:25.326 sys 0m8.658s 00:26:25.326 19:27:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:25.326 19:27:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:25.326 ************************************ 00:26:25.326 END TEST nvmf_identify_kernel_target 00:26:25.326 ************************************ 00:26:25.326 19:27:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:25.326 19:27:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:25.326 19:27:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:25.326 19:27:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.326 ************************************ 00:26:25.326 START TEST nvmf_auth_host 00:26:25.326 ************************************ 00:26:25.326 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:25.326 * Looking for test storage... 
00:26:25.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:25.326 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:25.326 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:26:25.326 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:25.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.587 --rc genhtml_branch_coverage=1 00:26:25.587 --rc genhtml_function_coverage=1 00:26:25.587 --rc genhtml_legend=1 00:26:25.587 --rc geninfo_all_blocks=1 00:26:25.587 --rc geninfo_unexecuted_blocks=1 00:26:25.587 00:26:25.587 ' 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:25.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.587 --rc genhtml_branch_coverage=1 00:26:25.587 --rc genhtml_function_coverage=1 00:26:25.587 --rc genhtml_legend=1 00:26:25.587 --rc geninfo_all_blocks=1 00:26:25.587 --rc geninfo_unexecuted_blocks=1 00:26:25.587 00:26:25.587 ' 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:25.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.587 --rc genhtml_branch_coverage=1 00:26:25.587 --rc genhtml_function_coverage=1 00:26:25.587 --rc genhtml_legend=1 00:26:25.587 --rc geninfo_all_blocks=1 00:26:25.587 --rc geninfo_unexecuted_blocks=1 00:26:25.587 00:26:25.587 ' 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:25.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.587 --rc genhtml_branch_coverage=1 00:26:25.587 --rc genhtml_function_coverage=1 00:26:25.587 --rc genhtml_legend=1 00:26:25.587 --rc geninfo_all_blocks=1 00:26:25.587 --rc geninfo_unexecuted_blocks=1 00:26:25.587 00:26:25.587 ' 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.587 19:27:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:25.587 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:25.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:25.588 19:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:32.163 19:27:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:32.163 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:32.164 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:32.164 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.164 
19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:32.164 Found net devices under 0000:86:00.0: cvl_0_0 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:32.164 Found net devices under 0000:86:00.1: cvl_0_1 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:32.164 19:27:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:32.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:32.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:26:32.164 00:26:32.164 --- 10.0.0.2 ping statistics --- 00:26:32.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.164 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:32.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:32.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:26:32.164 00:26:32.164 --- 10.0.0.1 ping statistics --- 00:26:32.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.164 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3877190 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3877190 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3877190 ']' 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
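Condensed for reference: the nvmftestinit/nvmf_tcp_init trace above amounts to the following TCP test-bed setup, sketched here as plain shell. The interface names cvl_0_0/cvl_0_1, the 10.0.0.x/24 addresses, the namespace name, and the workspace path are the values seen in this particular run, not generic defaults.

    # Flush old addresses, move the target-side port into its own network namespace, address both ends.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator-facing interface (the harness also tags the rule
    # with an iptables comment), then verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # nvmf_tgt is then launched inside the namespace with nvme_auth debug logging enabled:
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth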
00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:32.164 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2845baab8f56e434b94eb1c8514f3d97 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Hmr 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2845baab8f56e434b94eb1c8514f3d97 0 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2845baab8f56e434b94eb1c8514f3d97 0 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2845baab8f56e434b94eb1c8514f3d97 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Hmr 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Hmr 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Hmr 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:32.165 19:27:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=83e616ecd4833972d2e3a40c1b8afabfbe1ba25930e2b9373151d3b03e4decb8 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.2yB 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 83e616ecd4833972d2e3a40c1b8afabfbe1ba25930e2b9373151d3b03e4decb8 3 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 83e616ecd4833972d2e3a40c1b8afabfbe1ba25930e2b9373151d3b03e4decb8 3 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=83e616ecd4833972d2e3a40c1b8afabfbe1ba25930e2b9373151d3b03e4decb8 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.2yB 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.2yB 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.2yB 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=187e0bead8585128ae3f10ae9af749d56349f8360d8fb609 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.t2Z 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 187e0bead8585128ae3f10ae9af749d56349f8360d8fb609 0 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 187e0bead8585128ae3f10ae9af749d56349f8360d8fb609 0 
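For reference, the gen_dhchap_key calls traced above (nvmf/common.sh) follow the shape sketched below. This is a simplified stand-in, not the harness function itself: the digest codes match the trace (null=0, sha256=1, sha384=2, sha512=3), and the inline "python -" step that wraps the raw hex secret into the DHHC-1:0<digest>:<base64 of the secret plus what appears to be a 4-byte checksum>: form visible later in the log is only noted in a comment, not reproduced.

    # Minimal sketch of the key generation seen above.  Usage: gen_key_sketch <digest-code> <hex-length>
    gen_key_sketch() {
        local digest=$1 len=$2
        local key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # len/2 random bytes -> len hex characters
        file=$(mktemp -t spdk.key-sketch.XXX)
        # The real helper pipes $key through an inline python snippet to emit the
        # DHHC-1-formatted secret; that wrapping is omitted in this sketch.
        printf 'digest=%s hex=%s\n' "$digest" "$key" > "$file"
        chmod 0600 "$file"                                  # same permissions the trace applies
        echo "$file"
    }
    # e.g. the "gen_dhchap_key null 32" call above corresponds to: gen_key_sketch 0 32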
00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=187e0bead8585128ae3f10ae9af749d56349f8360d8fb609 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.t2Z 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.t2Z 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.t2Z 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ce96c28fe386ee323486dd9d46fad40a519510bf085d8f64 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.q1T 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ce96c28fe386ee323486dd9d46fad40a519510bf085d8f64 2 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ce96c28fe386ee323486dd9d46fad40a519510bf085d8f64 2 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ce96c28fe386ee323486dd9d46fad40a519510bf085d8f64 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.q1T 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.q1T 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.q1T 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:32.165 19:27:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=20cc0346c0776cc1af504e17a2abddac 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.p0v 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 20cc0346c0776cc1af504e17a2abddac 1 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 20cc0346c0776cc1af504e17a2abddac 1 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=20cc0346c0776cc1af504e17a2abddac 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:32.165 19:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.p0v 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.p0v 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.p0v 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a5c762cb36ab31bbcdec49af153dd199 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.KmE 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a5c762cb36ab31bbcdec49af153dd199 1 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a5c762cb36ab31bbcdec49af153dd199 1 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=a5c762cb36ab31bbcdec49af153dd199 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.KmE 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.KmE 00:26:32.165 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.KmE 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=336ca8e1b8e4f54f43c84db23cd10a3c044f92dc21e30fb2 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.QaF 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 336ca8e1b8e4f54f43c84db23cd10a3c044f92dc21e30fb2 2 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 336ca8e1b8e4f54f43c84db23cd10a3c044f92dc21e30fb2 2 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=336ca8e1b8e4f54f43c84db23cd10a3c044f92dc21e30fb2 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.QaF 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.QaF 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.QaF 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:32.166 19:27:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8d19c80f4b5ce3250b41eaafcdad045e 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.QoX 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8d19c80f4b5ce3250b41eaafcdad045e 0 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8d19c80f4b5ce3250b41eaafcdad045e 0 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8d19c80f4b5ce3250b41eaafcdad045e 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.QoX 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.QoX 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.QoX 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=66735b89c192b3634733fe6d8a7aaba86dfe2587ae0ac3c71a219ff5c953e2a7 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.JQN 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 66735b89c192b3634733fe6d8a7aaba86dfe2587ae0ac3c71a219ff5c953e2a7 3 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 66735b89c192b3634733fe6d8a7aaba86dfe2587ae0ac3c71a219ff5c953e2a7 3 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=66735b89c192b3634733fe6d8a7aaba86dfe2587ae0ac3c71a219ff5c953e2a7 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.JQN 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.JQN 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.JQN 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3877190 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3877190 ']' 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:32.166 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Hmr 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.2yB ]] 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2yB 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.t2Z 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.q1T ]] 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.q1T 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.p0v 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.KmE ]] 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KmE 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.QaF 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.QoX ]] 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.QoX 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.425 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.683 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.683 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:32.683 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.JQN 00:26:32.683 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.683 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.683 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.683 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:32.683 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:32.683 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:32.683 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:32.683 19:27:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:32.683 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:32.684 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.684 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.684 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:32.684 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.684 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:32.684 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:32.684 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:32.684 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:32.684 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:32.684 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:32.684 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:32.684 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:32.684 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:32.684 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:32.684 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:32.684 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:32.684 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:32.684 19:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:35.219 Waiting for block devices as requested 00:26:35.219 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:35.478 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:35.478 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:35.478 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:35.478 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:35.737 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:35.737 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:35.737 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:35.737 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:35.995 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:35.995 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:35.995 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:36.260 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:36.260 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:36.260 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:36.260 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:36.520 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:37.088 No valid GPT data, bailing 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:37.088 19:28:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:26:37.088 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:37.089 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:37.089 00:26:37.089 Discovery Log Number of Records 2, Generation counter 2 00:26:37.089 =====Discovery Log Entry 0====== 00:26:37.089 trtype: tcp 00:26:37.089 adrfam: ipv4 00:26:37.089 subtype: current discovery subsystem 00:26:37.089 treq: not specified, sq flow control disable supported 00:26:37.089 portid: 1 00:26:37.089 trsvcid: 4420 00:26:37.089 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:37.089 traddr: 10.0.0.1 00:26:37.089 eflags: none 00:26:37.089 sectype: none 00:26:37.089 =====Discovery Log Entry 1====== 00:26:37.089 trtype: tcp 00:26:37.089 adrfam: ipv4 00:26:37.089 subtype: nvme subsystem 00:26:37.089 treq: not specified, sq flow control disable supported 00:26:37.089 portid: 1 00:26:37.089 trsvcid: 4420 00:26:37.089 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:37.089 traddr: 10.0.0.1 00:26:37.089 eflags: none 00:26:37.089 sectype: none 00:26:37.089 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:37.089 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:37.089 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: ]] 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.348 nvme0n1 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: ]] 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.348 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.349 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.349 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.349 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.349 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.349 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.349 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.349 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.349 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:37.349 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.349 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.608 nvme0n1 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.608 19:28:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: ]] 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.608 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.867 nvme0n1 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: ]] 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.867 19:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.127 nvme0n1 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: ]] 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.127 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.386 nvme0n1 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.386 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.387 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.387 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.387 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.387 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.387 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.387 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.387 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.387 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.387 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.387 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:38.387 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.387 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.387 nvme0n1 00:26:38.387 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.646 19:28:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: ]] 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.646 nvme0n1 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.646 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: ]] 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.906 
19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.906 nvme0n1 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.906 19:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.906 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.165 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.165 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: ]] 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.166 19:28:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.166 nvme0n1 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.166 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: ]] 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.426 19:28:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.426 nvme0n1 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.426 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:39.686 19:28:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.686 nvme0n1 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.686 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.945 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.945 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:39.945 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.945 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:39.945 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.945 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.945 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:39.945 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:39.945 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:39.945 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:39.945 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.945 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: ]] 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.946 19:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.204 nvme0n1 00:26:40.204 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.204 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.204 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:40.205 19:28:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: ]] 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.205 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.464 nvme0n1 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: ]] 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
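For readers who want the shape of the test rather than the full xtrace, each dhgroup pass above (host/auth.sh@101-104 plus the @64/@65 verification and detach) condenses to roughly the loop below. This is a sketch reconstructed from the trace, not the script verbatim: it assumes the autotest helpers rpc_cmd and nvmet_auth_set_key and the keys[]/ckeys[] DHHC-1 secret arrays set up earlier in host/auth.sh, and it hardcodes the 10.0.0.1:4420 initiator address that get_main_ns_ip resolves to in this run (sha256/ffdhe4096 shown; the ffdhe6144 and ffdhe8192 passes only swap the dhgroup name).

  # One digest/dhgroup pass: install each target-side key, connect with the
  # matching host key (and controller key, when one exists), verify, detach.
  for keyid in "${!keys[@]}"; do
      nvmet_auth_set_key sha256 ffdhe4096 "$keyid"
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" "${ckey[@]}"
      # nvme0 appearing in the controller list (and its nvme0n1 namespace) means auth succeeded
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  done

Outside the autotest harness, rpc_cmd is typically a thin wrapper around scripts/rpc.py pointed at the target's RPC socket; the same sequence of calls applies.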
00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.464 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.724 nvme0n1 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: ]] 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.724 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.725 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.725 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.725 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.725 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.725 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.725 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.725 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.725 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.725 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.725 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.725 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:40.725 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.725 19:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.983 nvme0n1 00:26:40.983 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.984 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.984 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.984 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.984 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.984 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.984 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.984 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.984 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.984 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.242 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.242 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.242 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:41.242 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.243 19:28:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.243 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.502 nvme0n1 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: ]] 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.502 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.762 nvme0n1 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: ]] 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 
00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.762 19:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.330 nvme0n1 00:26:42.330 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.330 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.330 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.330 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.330 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.330 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.330 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.330 19:28:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.330 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.330 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: ]] 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.331 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.589 nvme0n1 00:26:42.589 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.589 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.589 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.589 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.589 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.589 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: ]] 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.849 19:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.109 nvme0n1 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.109 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.677 nvme0n1 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: ]] 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.677 19:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:44.246 nvme0n1 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: ]] 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.246 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.814 nvme0n1 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:44.814 
19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: ]] 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.814 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.073 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.073 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.073 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.073 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.073 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.073 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.073 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.073 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.073 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.073 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.073 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.073 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.073 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:45.073 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.073 19:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.642 nvme0n1 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: ]] 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.642 
19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.642 19:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.235 nvme0n1 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.235 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.810 nvme0n1 00:26:46.810 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.810 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: ]] 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.811 19:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.076 nvme0n1 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: ]] 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.076 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.335 nvme0n1 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:47.335 19:28:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: ]] 00:26:47.335 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.336 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.595 nvme0n1 00:26:47.595 19:28:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: ]] 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.595 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.596 nvme0n1 00:26:47.596 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.596 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.596 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.596 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.596 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.854 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.855 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.855 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.855 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.855 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.855 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.855 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:47.855 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.855 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.855 nvme0n1 00:26:47.855 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.855 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.855 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.855 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.855 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.855 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.855 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.855 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.855 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.855 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: ]] 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.114 19:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.114 nvme0n1 00:26:48.114 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.114 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.114 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.114 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.114 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.114 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.114 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.114 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.114 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.114 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.373 
19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: ]] 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.373 19:28:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.373 nvme0n1 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.373 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.632 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.632 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.632 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:48.632 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.632 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.632 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:48.632 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:48.632 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: ]] 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.633 nvme0n1 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:48.633 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: ]] 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.892 nvme0n1 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.892 19:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:49.150 
19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.150 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.151 nvme0n1 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.151 
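The rpc_cmd calls traced above make up one host-side connect_authenticate() pass: restrict the initiator to the digest and DH group under test, attach with the DH-HMAC-CHAP key for this keyid, confirm the controller came up, then detach. Condensed below for the keyid=2 / ffdhe3072 iteration; rpc_cmd is assumed to be the test suite's JSON-RPC (rpc.py) wrapper, and key2/ckey2 are the key names already registered earlier in the run, not shown here:

  # one connect_authenticate pass, condensed from the trace above (keyid=2, ffdhe3072)
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # the attach only succeeds if DH-HMAC-CHAP completed, so a named controller proves authentication
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0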
19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: ]] 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:49.151 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.411 nvme0n1 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.411 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: ]] 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:49.671 19:28:12 
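When a keyid has no companion ckey (keyid 4 in this run), the --dhchap-ctrlr-key option is dropped rather than passed empty. The trace shows how host/auth.sh@58 arranges that; the way the resulting array is spliced into the attach call at host/auth.sh@61 is inferred, since xtrace only prints the already-expanded command:

  # verbatim from the trace (host/auth.sh@58): an empty ckeys[keyid] leaves the array empty,
  # a non-empty one yields the two words '--dhchap-ctrlr-key ckey<N>'
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  # inferred use at host/auth.sh@61: "${ckey[@]}" expands to nothing when the array is empty,
  # so the option simply disappears from the keyid=4 iterations; common_args is a stand-in
  # for the -b/-t/-f/-a/-s/-q/-n arguments shown in the trace, not a name from the script
  rpc_cmd bdev_nvme_attach_controller "${common_args[@]}" --dhchap-key "key${keyid}" "${ckey[@]}"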
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.671 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.930 nvme0n1 00:26:49.930 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.930 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.930 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.930 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.930 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.930 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.930 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.930 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.930 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.930 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.930 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.930 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.930 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:49.930 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.930 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.930 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: ]] 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.931 19:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.190 nvme0n1 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: ]] 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.190 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.191 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:50.191 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.191 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.450 nvme0n1 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.450 19:28:13 
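get_main_ns_ip (the nvmf/common.sh@769-783 lines above) resolves which address the host should dial: it maps the transport to an environment-variable name and dereferences it, landing on 10.0.0.1 in this run. A rough reconstruction; the variable holding the transport and the indirection step are inferred, because xtrace only prints expanded values:

  # reconstructed from the nvmf/common.sh@769-783 trace lines; names marked 'assumed'
  # are not visible in the trace
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1    # assumed variable; the trace shows '[[ -z tcp ]]'
      ip=${ip_candidates[$TEST_TRANSPORT]}    # -> NVMF_INITIATOR_IP for tcp
      [[ -z ${!ip} ]] && return 1             # indirect expansion; 10.0.0.1 in this run
      echo "${!ip}"
  }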
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.450 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.709 nvme0n1 00:26:50.709 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.709 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.709 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.709 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.709 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.709 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.709 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.709 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.709 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.709 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.968 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.968 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:50.968 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.968 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:50.968 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.968 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.968 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:50.968 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:50.968 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:50.968 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:50.968 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.968 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:50.968 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:50.968 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: ]] 00:26:50.968 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:50.968 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:50.968 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.969 19:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.228 nvme0n1 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: ]] 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.228 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.796 nvme0n1 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.796 19:28:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: ]] 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.796 19:28:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.796 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.797 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.797 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.797 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.797 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.797 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.797 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.797 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.797 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.797 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:51.797 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.797 19:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.056 nvme0n1 00:26:52.056 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.056 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.056 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.056 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.056 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.056 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.056 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: ]] 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:52.315 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.315 
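On the target side, nvmet_auth_set_key (host/auth.sh@42-51 above) pushes the same digest, DH group and DHHC-1 secrets into the kernel nvmet host entry. xtrace records the echo commands but not their redirect targets, so the configfs path and attribute names below are assumptions; only the echoed values come from the trace:

  # assumed kernel-nvmet configfs layout for the host entry used by this run
  host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)' > "$host_cfs/dhchap_hash"      # digest under test (auth.sh@48)
  echo ffdhe6144      > "$host_cfs/dhchap_dhgroup"   # DH group under test (auth.sh@49)
  echo "$key"         > "$host_cfs/dhchap_key"       # host secret, the DHHC-1:02:... value above (auth.sh@50)
  [[ -n "$ckey" ]] && echo "$ckey" > "$host_cfs/dhchap_ctrl_key"   # only when a ckey exists (auth.sh@51)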
19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.574 nvme0n1 00:26:52.574 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.574 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.574 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.574 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.574 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.574 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.574 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.574 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.574 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.574 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.574 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.574 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.574 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.575 19:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.144 nvme0n1 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.144 19:28:16 
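Stripped of the xtrace noise, each host-side pass in this stretch is the same short RPC sequence; rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so a standalone equivalent of the sha384/ffdhe6144, key 3 iteration above would look roughly like the sketch below. The RPC socket path is an assumption, and key3/ckey3 are key names registered earlier in the test, outside this excerpt.

    RPC='scripts/rpc.py -s /var/tmp/spdk.sock'   # socket path assumed

    # restrict negotiation to the digest/dhgroup pair under test
    $RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # attach to the kernel target; the controller only appears if DH-HMAC-CHAP succeeds
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # verify the controller showed up, then tear it down for the next iteration
    $RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    $RPC bdev_nvme_detach_controller nvme0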
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: ]] 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.144 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.712 nvme0n1 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: ]] 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.712 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.713 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:53.713 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.713 19:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.281 nvme0n1 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: ]] 00:26:54.281 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:54.282 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:54.282 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.282 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:54.282 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:54.282 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:54.282 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.282 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:54.282 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.282 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.541 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.541 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.541 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.541 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.541 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.541 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.541 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.541 
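On the target side, the echo lines at host/auth.sh@48-51 above are nvmet_auth_set_key programming the kernel nvmet host entry for this iteration; bash xtrace does not print redirections, which is why only the values show up in the trace. A rough reconstruction, with the configfs paths being an assumption rather than something visible in the log:

    # assumed layout of the kernel target's host entry; only the written values come from the trace
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo 'hmac(sha384)' > "$host/dhchap_hash"      # digest for this pass
    echo ffdhe8192      > "$host/dhchap_dhgroup"   # DH group for this pass
    echo "$key"         > "$host/dhchap_key"       # the DHHC-1:01:... string echoed above
    echo "$ckey"        > "$host/dhchap_ctrl_key"  # controller key; skipped when ckey is empty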
19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.541 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.541 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.541 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.541 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.541 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:54.541 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.541 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.109 nvme0n1 00:26:55.109 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.110 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.110 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.110 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.110 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.110 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.110 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.110 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.110 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.110 19:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: ]] 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.110 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.678 nvme0n1 00:26:55.678 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.678 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.678 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.678 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.678 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.678 19:28:18 
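The block that keeps repeating between the RPCs (nvmf/common.sh@769-783) is the get_main_ns_ip helper resolving which address to dial for the transport under test. Read back from the expanded values in the trace it amounts to something like the following paraphrase; the TEST_TRANSPORT name and the exact error handling are presumed, since xtrace only shows the substituted values:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP       # the branch taken in this tcp run
        )
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"                     # resolves to 10.0.0.1 throughout this run
    }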
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.678 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.678 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.678 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.678 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.678 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.678 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.678 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:55.678 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.678 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:55.678 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:55.678 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:55.678 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.679 19:28:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.679 19:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.247 nvme0n1 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: ]] 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.247 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:56.506 nvme0n1 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: ]] 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.506 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.507 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.507 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.507 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:56.507 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.507 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.765 nvme0n1 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:56.765 
19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.765 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: ]] 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.766 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.025 nvme0n1 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: ]] 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.025 
19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.025 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:57.026 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:57.026 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.026 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:57.026 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.026 19:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.026 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.026 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.026 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.026 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.026 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.026 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.026 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.026 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.026 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.026 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.026 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.026 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.026 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:57.026 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.026 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.285 nvme0n1 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.285 nvme0n1 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.285 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.544 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.544 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.544 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.544 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.544 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.544 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:57.544 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.544 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:57.544 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.544 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.544 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.544 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:57.544 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:57.544 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:57.544 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.544 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.544 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: ]] 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.545 nvme0n1 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.545 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.805 
19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: ]] 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.805 19:28:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.805 nvme0n1 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.805 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:58.065 19:28:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: ]] 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.065 19:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.065 nvme0n1 00:26:58.065 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.065 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.065 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.065 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.066 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.066 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.066 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.066 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.066 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.066 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: ]] 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.325 19:28:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.325 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.326 nvme0n1 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.326 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.586 
19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
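Each sha512 pass in this trace repeats the same shape: nvmet_auth_set_key (host/auth.sh@42-51) pushes the digest, DH group and DHHC-1 secret for the host NQN to the target side, then connect_authenticate (host/auth.sh@55-65) restricts the initiator to that digest/dhgroup via bdev_nvme_set_options, attaches with bdev_nvme_attach_controller using the matching key name, confirms nvme0 shows up in bdev_nvme_get_controllers, and detaches. The snippet below is a minimal standalone sketch of that loop body, not the test's own helpers: it assumes rpc_cmd wraps scripts/rpc.py, assumes the target-side secret is written through the kernel nvmet configfs (the dhchap_* attribute paths are assumptions, not taken from this log), and assumes keyN/ckeyN are key names registered with SPDK earlier in the run (registration is not shown in this part of the trace).

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration from the trace above.
rpc_py=scripts/rpc.py                      # assumption: rpc_cmd wraps this client
hostnqn=nqn.2024-02.io.spdk:host0          # from the log
subnqn=nqn.2024-02.io.spdk:cnode0          # from the log
traddr=10.0.0.1                            # from the log (get_main_ns_ip result)
trsvcid=4420                               # from the log

connect_authenticate_once() {
    local digest=$1 dhgroup=$2 keyid=$3 key=$4 ckey=$5

    # Target side (what nvmet_auth_set_key does): expose the hash, DH group and
    # DHHC-1 secret for this host. The configfs attribute names are assumptions.
    local host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo "hmac($digest)" > "$host_cfg/dhchap_hash"       # e.g. 'hmac(sha512)'
    echo "$dhgroup"      > "$host_cfg/dhchap_dhgroup"    # e.g. ffdhe3072
    echo "$key"          > "$host_cfg/dhchap_key"        # DHHC-1:xx:...: secret
    [[ -n $ckey ]] && echo "$ckey" > "$host_cfg/dhchap_ctrl_key"  # bidirectional

    # Initiator side (what connect_authenticate does), taken from the trace:
    $rpc_py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    $rpc_py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$traddr" -s "$trsvcid" -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" ${ckey:+--dhchap-ctrlr-key "ckey$keyid"}

    # The trace then verifies the controller came up and tears it down.
    [[ $($rpc_py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $rpc_py bdev_nvme_detach_controller nvme0
}

# Example corresponding to the keyid=1 pass of the ffdhe3072 group above
# (key1/ckey1 hold the DHHC-1 strings printed at host/auth.sh@45 and @46):
# connect_authenticate_once sha512 ffdhe3072 1 "$key1" "$ckey1"

When no controller secret exists (keyid=4 in this trace, ckey is empty), the ${ckey:+...} expansion drops --dhchap-ctrlr-key entirely, which matches the host/auth.sh@58 array expansion and the key4-only attach calls seen above.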
00:26:58.586 nvme0n1 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.586 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: ]] 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:58.846 19:28:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.846 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.106 nvme0n1 00:26:59.106 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.106 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.106 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.106 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.106 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.106 19:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.106 19:28:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: ]] 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.106 19:28:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.106 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.365 nvme0n1 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: ]] 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.365 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:59.366 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.366 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.366 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.366 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.366 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.366 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.366 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.366 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.366 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.366 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.366 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.366 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.366 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.366 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.366 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:59.366 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.366 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.624 nvme0n1 00:26:59.624 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.624 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.624 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.624 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.624 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.624 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.624 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.624 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:59.624 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.624 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.624 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.624 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.624 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: ]] 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.625 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.884 nvme0n1 00:26:59.884 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.884 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.884 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.884 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.884 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.884 19:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.143 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.144 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.144 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.144 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.144 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.144 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.144 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.144 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.144 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.144 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.144 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:00.144 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.144 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.403 nvme0n1 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: ]] 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.403 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.404 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.404 19:28:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.404 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.404 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.404 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.404 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.404 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.404 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.404 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.404 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.404 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.404 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.404 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:00.404 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.404 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.663 nvme0n1 00:27:00.663 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.663 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.663 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.663 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.663 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.663 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.922 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: ]] 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.923 19:28:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.923 19:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.182 nvme0n1 00:27:01.182 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.182 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.182 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.182 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.182 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.182 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.182 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.182 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.182 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.182 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.182 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.182 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.182 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:01.182 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.182 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.182 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:01.182 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: ]] 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.183 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.809 nvme0n1 00:27:01.809 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.809 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.809 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: ]] 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.810 19:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.162 nvme0n1 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:02.162 19:28:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.162 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.732 nvme0n1 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjg0NWJhYWI4ZjU2ZTQzNGI5NGViMWM4NTE0ZjNkOTc8meYH: 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: ]] 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlNjE2ZWNkNDgzMzk3MmQyZTNhNDBjMWI4YWZhYmZiZTFiYTI1OTMwZTJiOTM3MzE1MWQzYjAzZTRkZWNiOD755v0=: 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.732 19:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.301 nvme0n1 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: ]] 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.301 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.869 nvme0n1 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.869 19:28:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: ]] 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.869 19:28:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:03.869 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.870 19:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.437 nvme0n1 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM2Y2E4ZTFiOGU0ZjU0ZjQzYzg0ZGIyM2NkMTBhM2MwNDRmOTJkYzIxZTMwZmIyhnc/Uw==: 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: ]] 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQxOWM4MGY0YjVjZTMyNTBiNDFlYWFmY2RhZDA0NWWbcUU6: 00:27:04.437 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:04.438 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.438 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:04.438 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:04.438 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:04.438 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.438 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:04.438 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.438 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.696 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.696 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.696 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.696 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.696 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.696 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.696 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.696 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.696 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.696 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.696 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.696 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.697 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:04.697 19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.697 
19:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.264 nvme0n1 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY3MzViODljMTkyYjM2MzQ3MzNmZTZkOGE3YWFiYTg2ZGZlMjU4N2FlMGFjM2M3MWEyMTlmZjVjOTUzZTJhN8BdEcs=: 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.264 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.265 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.265 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.265 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.265 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.265 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.265 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.265 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.265 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.265 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.265 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.265 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.265 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.265 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:05.265 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.265 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.833 nvme0n1 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: ]] 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.833 request: 00:27:05.833 { 00:27:05.833 "name": "nvme0", 00:27:05.833 "trtype": "tcp", 00:27:05.833 "traddr": "10.0.0.1", 00:27:05.833 "adrfam": "ipv4", 00:27:05.833 "trsvcid": "4420", 00:27:05.833 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:05.833 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:05.833 "prchk_reftag": false, 00:27:05.833 "prchk_guard": false, 00:27:05.833 "hdgst": false, 00:27:05.833 "ddgst": false, 00:27:05.833 "allow_unrecognized_csi": false, 00:27:05.833 "method": "bdev_nvme_attach_controller", 00:27:05.833 "req_id": 1 00:27:05.833 } 00:27:05.833 Got JSON-RPC error response 00:27:05.833 response: 00:27:05.833 { 00:27:05.833 "code": -5, 00:27:05.833 "message": "Input/output error" 00:27:05.833 } 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
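For reference, the "Input/output error" returned above is the expected outcome: the target was just keyed for sha256/ffdhe2048 (keyid 1) while the host attempted bdev_nvme_attach_controller without supplying any DH-CHAP key. A minimal standalone sketch of the failing call and of the authenticated form exercised in the positive loops earlier in this run, assuming rpc_cmd wraps SPDK's scripts/rpc.py as usual and that key1/ckey1 are keyring entries registered earlier by this suite:

    # Unauthenticated attach against a DH-CHAP-protected subsystem;
    # expected to fail with JSON-RPC code -5 (Input/output error), as logged above.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0

    # Authenticated attach, matching the digest/dhgroup the target was keyed with.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1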
00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:05.833 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:06.093 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:06.093 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:06.093 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:06.093 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:06.093 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.093 19:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.093 request: 00:27:06.093 { 00:27:06.093 "name": "nvme0", 00:27:06.093 "trtype": "tcp", 00:27:06.093 "traddr": "10.0.0.1", 00:27:06.093 "adrfam": "ipv4", 00:27:06.093 "trsvcid": "4420", 00:27:06.093 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:06.093 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:06.093 "prchk_reftag": false, 00:27:06.093 "prchk_guard": false, 00:27:06.093 "hdgst": false, 00:27:06.093 "ddgst": false, 00:27:06.093 "dhchap_key": "key2", 00:27:06.093 "allow_unrecognized_csi": false, 00:27:06.093 "method": "bdev_nvme_attach_controller", 00:27:06.093 "req_id": 1 00:27:06.093 } 00:27:06.093 Got JSON-RPC error response 00:27:06.093 response: 00:27:06.093 { 00:27:06.093 "code": -5, 00:27:06.093 "message": "Input/output error" 00:27:06.093 } 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
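Each of these expected-failure attaches runs through the suite's NOT helper, whose exit-status bookkeeping (local es=0, es=1, the es > 128 check) is visible in the trace. Below is a simplified, self-contained approximation of that inversion logic; the real helper in autotest_common.sh also validates the wrapped argument and treats signal deaths (es > 128) separately, which this sketch only notes in a comment.

# Sketch of a NOT-style helper: the negative test passes only when the wrapped
# command fails. Simplified from the behaviour visible in the trace, not the
# original autotest_common.sh source.
NOT() {
    local es=0
    "$@" || es=$?          # run the wrapped command, remember its exit status
    # The real helper handles es > 128 (death by signal) and argument
    # validation separately; this sketch just requires a plain non-zero exit.
    (( es != 0 ))
}

# Usage, mirroring the trace: attaching with a key the target does not expect
# has to fail for the test to count as passed (assumes a key named key2 was
# registered earlier, as the test script does).
NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 \
    && echo "mismatched key rejected, as expected"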
00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.093 request: 00:27:06.093 { 00:27:06.093 "name": "nvme0", 00:27:06.093 "trtype": "tcp", 00:27:06.093 "traddr": "10.0.0.1", 00:27:06.093 "adrfam": "ipv4", 00:27:06.093 "trsvcid": "4420", 00:27:06.093 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:06.093 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:06.093 "prchk_reftag": false, 00:27:06.093 "prchk_guard": false, 00:27:06.093 "hdgst": false, 00:27:06.093 "ddgst": false, 00:27:06.093 "dhchap_key": "key1", 00:27:06.093 "dhchap_ctrlr_key": "ckey2", 00:27:06.093 "allow_unrecognized_csi": false, 00:27:06.093 "method": "bdev_nvme_attach_controller", 00:27:06.093 "req_id": 1 00:27:06.093 } 00:27:06.093 Got JSON-RPC error response 00:27:06.093 response: 00:27:06.093 { 00:27:06.093 "code": -5, 00:27:06.093 "message": "Input/output 
error" 00:27:06.093 } 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.093 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.352 nvme0n1 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: ]] 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.352 request: 00:27:06.352 { 00:27:06.352 "name": "nvme0", 00:27:06.352 "dhchap_key": "key1", 00:27:06.352 "dhchap_ctrlr_key": "ckey2", 00:27:06.352 "method": "bdev_nvme_set_keys", 00:27:06.352 "req_id": 1 00:27:06.352 } 00:27:06.352 Got JSON-RPC error response 00:27:06.352 response: 00:27:06.352 { 00:27:06.352 "code": -13, 00:27:06.352 "message": "Permission denied" 00:27:06.352 } 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:06.352 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:27:06.611 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:06.611 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.611 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.611 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.611 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.611 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:06.611 19:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:07.547 19:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.547 19:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:07.547 19:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.547 19:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.547 19:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.547 19:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:07.547 19:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:08.483 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.483 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:08.483 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.483 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.483 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.483 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:08.483 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:08.483 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.483 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.483 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.483 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.483 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:27:08.483 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:27:08.483 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.483 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.483 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg3ZTBiZWFkODU4NTEyOGFlM2YxMGFlOWFmNzQ5ZDU2MzQ5ZjgzNjBkOGZiNjA5jGK7uw==: 00:27:08.484 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: ]] 00:27:08.484 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Y2U5NmMyOGZlMzg2ZWUzMjM0ODZkZDlkNDZmYWQ0MGE1MTk1MTBiZjA4NWQ4ZjY0XndWRw==: 00:27:08.484 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:08.484 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.484 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.484 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.743 nvme0n1 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBjYzAzNDZjMDc3NmNjMWFmNTA0ZTE3YTJhYmRkYWPQgQCk: 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: ]] 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTVjNzYyY2IzNmFiMzFiYmNkZWM0OWFmMTUzZGQxOTnx5NDt: 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:08.743 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.744 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.744 request: 00:27:08.744 { 00:27:08.744 "name": "nvme0", 00:27:08.744 "dhchap_key": "key2", 00:27:08.744 "dhchap_ctrlr_key": "ckey1", 00:27:08.744 "method": "bdev_nvme_set_keys", 00:27:08.744 "req_id": 1 00:27:08.744 } 00:27:08.744 Got JSON-RPC error response 00:27:08.744 response: 00:27:08.744 { 00:27:08.744 "code": -13, 00:27:08.744 "message": "Permission denied" 00:27:08.744 } 00:27:08.744 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:08.744 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:08.744 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:08.744 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:08.744 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:08.744 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.744 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:08.744 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.744 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.744 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.003 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:09.003 19:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:09.940 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.940 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.940 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:09.940 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.940 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.940 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:09.940 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:09.940 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:09.940 19:28:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:09.940 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:09.940 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:09.940 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:09.940 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:09.940 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:09.940 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:09.940 rmmod nvme_tcp 00:27:09.940 rmmod nvme_fabrics 00:27:09.940 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:09.940 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:09.941 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:09.941 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3877190 ']' 00:27:09.941 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3877190 00:27:09.941 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3877190 ']' 00:27:09.941 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3877190 00:27:09.941 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:09.941 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:09.941 19:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3877190 00:27:09.941 19:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:09.941 19:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:09.941 19:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3877190' 00:27:09.941 killing process with pid 3877190 00:27:09.941 19:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3877190 00:27:09.941 19:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3877190 00:27:10.200 19:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:10.200 19:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:10.200 19:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:10.200 19:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:10.200 19:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:10.200 19:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:10.200 19:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:10.200 19:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:10.200 19:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:10.200 19:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.200 19:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:27:10.200 19:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.105 19:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:12.364 19:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:12.364 19:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:12.364 19:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:12.364 19:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:12.364 19:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:12.364 19:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:12.364 19:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:12.364 19:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:12.364 19:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:12.364 19:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:12.364 19:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:12.364 19:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:15.653 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:15.653 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:15.653 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:15.653 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:15.653 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:15.653 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:15.653 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:15.653 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:15.653 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:15.653 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:15.653 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:15.653 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:15.653 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:15.653 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:15.653 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:15.653 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:16.590 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:16.849 19:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Hmr /tmp/spdk.key-null.t2Z /tmp/spdk.key-sha256.p0v /tmp/spdk.key-sha384.QaF /tmp/spdk.key-sha512.JQN /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:16.849 19:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:19.386 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:19.386 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:19.386 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:27:19.386 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:19.386 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:19.386 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:19.386 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:19.386 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:19.648 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:19.648 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:19.648 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:19.648 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:19.648 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:19.648 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:19.648 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:19.648 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:19.648 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:19.648 00:27:19.648 real 0m54.387s 00:27:19.648 user 0m48.468s 00:27:19.648 sys 0m12.559s 00:27:19.648 19:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:19.648 19:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.648 ************************************ 00:27:19.648 END TEST nvmf_auth_host 00:27:19.648 ************************************ 00:27:19.648 19:28:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:19.648 19:28:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:19.648 19:28:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:19.648 19:28:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:19.648 19:28:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.648 ************************************ 00:27:19.648 START TEST nvmf_digest 00:27:19.648 ************************************ 00:27:19.648 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:19.908 * Looking for test storage... 
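The asterisk banners and the real/user/sys summary above come from the harness's run_test wrapper, which brackets a timed test script between matching START TEST / END TEST markers; nvmf_digest is launched the same way right after nvmf_auth_host finishes. A stripped-down sketch of that pattern, with the banner text modelled on the log and the timing and exit-code handling simplified relative to the real wrapper:

# Sketch of a run_test-style wrapper: print a START banner, time the test,
# print an END banner. Banner text is modelled on the log output.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    local start=$SECONDS
    "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name (exit $rc after $(( SECONDS - start ))s)"
    echo "************************************"
    return $rc
}

# Example call shaped like the one in the trace (path is the CI workspace layout):
# run_test_sketch nvmf_digest \
#     /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp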
00:27:19.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:19.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.908 --rc genhtml_branch_coverage=1 00:27:19.908 --rc genhtml_function_coverage=1 00:27:19.908 --rc genhtml_legend=1 00:27:19.908 --rc geninfo_all_blocks=1 00:27:19.908 --rc geninfo_unexecuted_blocks=1 00:27:19.908 00:27:19.908 ' 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:19.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.908 --rc genhtml_branch_coverage=1 00:27:19.908 --rc genhtml_function_coverage=1 00:27:19.908 --rc genhtml_legend=1 00:27:19.908 --rc geninfo_all_blocks=1 00:27:19.908 --rc geninfo_unexecuted_blocks=1 00:27:19.908 00:27:19.908 ' 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:19.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.908 --rc genhtml_branch_coverage=1 00:27:19.908 --rc genhtml_function_coverage=1 00:27:19.908 --rc genhtml_legend=1 00:27:19.908 --rc geninfo_all_blocks=1 00:27:19.908 --rc geninfo_unexecuted_blocks=1 00:27:19.908 00:27:19.908 ' 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:19.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.908 --rc genhtml_branch_coverage=1 00:27:19.908 --rc genhtml_function_coverage=1 00:27:19.908 --rc genhtml_legend=1 00:27:19.908 --rc geninfo_all_blocks=1 00:27:19.908 --rc geninfo_unexecuted_blocks=1 00:27:19.908 00:27:19.908 ' 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.908 
19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.908 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:19.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:19.909 19:28:42 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:19.909 19:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.481 
19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:26.481 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:26.481 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:26.481 Found net devices under 0000:86:00.0: cvl_0_0 
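The discovery loop above matches Intel E810 functions by vendor/device ID (0x8086 with 0x159b or 0x1592) and resolves each one to its kernel interface by globbing /sys/bus/pci/devices/<bdf>/net/, which is how 0000:86:00.0 maps to cvl_0_0. The same read-only lookup can be done on its own; a small sketch, with the device-ID list taken from the trace and everything else standard sysfs:

# Sketch: list the net interfaces behind Intel E810 PCI functions (device IDs
# 0x159b / 0x1592, the IDs matched in the trace). Read-only sysfs walk.
shopt -s nullglob
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor")    # e.g. 0x8086
    device=$(<"$dev/device")    # e.g. 0x159b
    [[ $vendor == 0x8086 && ( $device == 0x159b || $device == 0x1592 ) ]] || continue
    ifaces=("$dev"/net/*)       # only populated while a netdev driver owns the port
    if (( ${#ifaces[@]} )); then
        echo "Found ${dev##*/} ($vendor - $device): ${ifaces[*]##*/}"
    else
        echo "Found ${dev##*/} ($vendor - $device): no netdev (bound to vfio-pci?)"
    fi
done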
00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:26.481 Found net devices under 0000:86:00.1: cvl_0_1 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:26.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:26.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:27:26.481 00:27:26.481 --- 10.0.0.2 ping statistics --- 00:27:26.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.481 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:26.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:26.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:27:26.481 00:27:26.481 --- 10.0.0.1 ping statistics --- 00:27:26.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.481 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:26.481 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:26.482 ************************************ 00:27:26.482 START TEST nvmf_digest_clean 00:27:26.482 ************************************ 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3890960 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3890960 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3890960 ']' 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:26.482 19:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:26.482 [2024-11-26 19:28:48.946452] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:27:26.482 [2024-11-26 19:28:48.946503] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.482 [2024-11-26 19:28:49.029488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.482 [2024-11-26 19:28:49.069519] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.482 [2024-11-26 19:28:49.069554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:26.482 [2024-11-26 19:28:49.069560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:26.482 [2024-11-26 19:28:49.069566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:26.482 [2024-11-26 19:28:49.069571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
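For reference, the nvmf_tcp_init sequence traced above condenses to roughly the following; interface names, addresses, and the iptables rule are taken directly from this run (the -m comment tag is dropped for brevity). It is what leaves the target listening on 10.0.0.2 inside the cvl_0_0_ns_spdk namespace while the initiator stays on 10.0.0.1 in the root namespace.

# Reset addressing, move the target-side port into its own namespace, address both sides.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP (port 4420) in on the initiator-side interface, then verify both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1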
00:27:26.482 [2024-11-26 19:28:49.070136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.741 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:26.741 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:26.741 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:26.741 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:26.741 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:26.741 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:26.741 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:26.741 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:26.741 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:26.741 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.741 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:27.001 null0 00:27:27.001 [2024-11-26 19:28:49.904526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:27.001 [2024-11-26 19:28:49.928740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:27.001 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.001 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:27.001 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:27.001 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:27.001 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:27.001 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:27.001 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:27.001 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:27.001 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3891044 00:27:27.001 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3891044 /var/tmp/bperf.sock 00:27:27.001 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:27.001 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3891044 ']' 00:27:27.001 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:27.001 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:27:27.001 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:27.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:27.001 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:27.001 19:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:27.001 [2024-11-26 19:28:49.982449] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:27:27.001 [2024-11-26 19:28:49.982491] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891044 ] 00:27:27.001 [2024-11-26 19:28:50.061687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.001 [2024-11-26 19:28:50.103782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.260 19:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:27.260 19:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:27.260 19:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:27.260 19:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:27.260 19:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:27.519 19:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:27.519 19:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:27.777 nvme0n1 00:27:27.777 19:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:27.777 19:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:27.777 Running I/O for 2 seconds... 
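The run announced above is driven entirely over bdevperf's RPC socket: bdevperf was launched with --wait-for-rpc, so the harness first finishes framework init, then attaches a data-digest (--ddgst) NVMe/TCP controller, and only then starts the timed workload. A condensed sketch, with paths and arguments exactly as traced and only the variable framing added:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BPERF_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
SOCK=/var/tmp/bperf.sock

# Complete bdevperf's deferred startup, then create the digest-enabled controller.
$RPC -s $SOCK framework_start_init
$RPC -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Run the configured workload (randread, 4 KiB, qd=128 for this instance) for 2 seconds.
$BPERF_PY -s $SOCK perform_tests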
00:27:29.719 25122.00 IOPS, 98.13 MiB/s [2024-11-26T18:28:53.091Z] 25617.50 IOPS, 100.07 MiB/s 00:27:29.977 Latency(us) 00:27:29.977 [2024-11-26T18:28:53.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.977 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:29.977 nvme0n1 : 2.00 25628.14 100.11 0.00 0.00 4990.05 2637.04 12170.97 00:27:29.977 [2024-11-26T18:28:53.091Z] =================================================================================================================== 00:27:29.977 [2024-11-26T18:28:53.091Z] Total : 25628.14 100.11 0.00 0.00 4990.05 2637.04 12170.97 00:27:29.977 { 00:27:29.977 "results": [ 00:27:29.977 { 00:27:29.977 "job": "nvme0n1", 00:27:29.977 "core_mask": "0x2", 00:27:29.977 "workload": "randread", 00:27:29.977 "status": "finished", 00:27:29.977 "queue_depth": 128, 00:27:29.977 "io_size": 4096, 00:27:29.977 "runtime": 2.004164, 00:27:29.977 "iops": 25628.142207923105, 00:27:29.977 "mibps": 100.10993049969963, 00:27:29.977 "io_failed": 0, 00:27:29.977 "io_timeout": 0, 00:27:29.977 "avg_latency_us": 4990.04879755021, 00:27:29.977 "min_latency_us": 2637.0438095238096, 00:27:29.977 "max_latency_us": 12170.971428571429 00:27:29.977 } 00:27:29.977 ], 00:27:29.977 "core_count": 1 00:27:29.977 } 00:27:29.977 19:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:29.977 19:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:29.977 19:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:29.977 19:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:29.977 | select(.opcode=="crc32c") 00:27:29.977 | "\(.module_name) \(.executed)"' 00:27:29.977 19:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:29.977 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:29.977 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:29.977 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:29.977 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:29.977 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3891044 00:27:29.977 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3891044 ']' 00:27:29.977 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3891044 00:27:29.977 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:29.977 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:29.977 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3891044 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3891044' 00:27:30.236 killing process with pid 3891044 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3891044 00:27:30.236 Received shutdown signal, test time was about 2.000000 seconds 00:27:30.236 00:27:30.236 Latency(us) 00:27:30.236 [2024-11-26T18:28:53.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.236 [2024-11-26T18:28:53.350Z] =================================================================================================================== 00:27:30.236 [2024-11-26T18:28:53.350Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3891044 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3891680 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3891680 /var/tmp/bperf.sock 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3891680 ']' 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:30.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:30.236 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:30.236 [2024-11-26 19:28:53.312579] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:27:30.236 [2024-11-26 19:28:53.312630] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891680 ] 00:27:30.236 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:30.236 Zero copy mechanism will not be used. 00:27:30.493 [2024-11-26 19:28:53.385612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.493 [2024-11-26 19:28:53.427341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.493 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:30.493 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:30.493 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:30.493 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:30.493 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:30.751 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:30.751 19:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:31.009 nvme0n1 00:27:31.009 19:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:31.009 19:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:31.268 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:31.268 Zero copy mechanism will not be used. 00:27:31.268 Running I/O for 2 seconds... 
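Aside on the result tables: the MiB/s column is simply IOPS scaled by the I/O size, MiB/s = IOPS * io_size / 2^20. A quick check using the values from the 4 KiB run's JSON block above:

# 25628.14 IOPS at 4096-byte I/Os -> ~100.11 MiB/s, matching the reported "mibps" field.
awk 'BEGIN { printf "%.2f MiB/s\n", 25628.14 * 4096 / 1048576 }'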
00:27:33.142 6128.00 IOPS, 766.00 MiB/s [2024-11-26T18:28:56.256Z] 5890.50 IOPS, 736.31 MiB/s 00:27:33.142 Latency(us) 00:27:33.142 [2024-11-26T18:28:56.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.142 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:33.142 nvme0n1 : 2.00 5889.84 736.23 0.00 0.00 2713.76 616.35 5617.37 00:27:33.142 [2024-11-26T18:28:56.256Z] =================================================================================================================== 00:27:33.142 [2024-11-26T18:28:56.256Z] Total : 5889.84 736.23 0.00 0.00 2713.76 616.35 5617.37 00:27:33.142 { 00:27:33.142 "results": [ 00:27:33.142 { 00:27:33.142 "job": "nvme0n1", 00:27:33.142 "core_mask": "0x2", 00:27:33.142 "workload": "randread", 00:27:33.142 "status": "finished", 00:27:33.142 "queue_depth": 16, 00:27:33.142 "io_size": 131072, 00:27:33.142 "runtime": 2.002942, 00:27:33.142 "iops": 5889.836051168731, 00:27:33.142 "mibps": 736.2295063960913, 00:27:33.142 "io_failed": 0, 00:27:33.142 "io_timeout": 0, 00:27:33.142 "avg_latency_us": 2713.7565324517536, 00:27:33.142 "min_latency_us": 616.3504761904762, 00:27:33.142 "max_latency_us": 5617.371428571429 00:27:33.142 } 00:27:33.142 ], 00:27:33.142 "core_count": 1 00:27:33.142 } 00:27:33.142 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:33.142 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:33.142 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:33.142 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:33.142 | select(.opcode=="crc32c") 00:27:33.142 | "\(.module_name) \(.executed)"' 00:27:33.142 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:33.401 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:33.401 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:33.401 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:33.401 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:33.401 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3891680 00:27:33.401 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3891680 ']' 00:27:33.401 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3891680 00:27:33.401 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:33.401 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:33.401 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3891680 00:27:33.401 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:33.401 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:27:33.401 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3891680' 00:27:33.401 killing process with pid 3891680 00:27:33.401 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3891680 00:27:33.401 Received shutdown signal, test time was about 2.000000 seconds 00:27:33.401 00:27:33.401 Latency(us) 00:27:33.401 [2024-11-26T18:28:56.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.401 [2024-11-26T18:28:56.515Z] =================================================================================================================== 00:27:33.401 [2024-11-26T18:28:56.515Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:33.401 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3891680 00:27:33.660 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:33.660 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:33.660 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:33.660 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:33.660 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:33.660 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:33.660 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:33.660 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3892162 00:27:33.660 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3892162 /var/tmp/bperf.sock 00:27:33.660 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:33.660 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3892162 ']' 00:27:33.660 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:33.660 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:33.660 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:33.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:33.660 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:33.660 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:33.660 [2024-11-26 19:28:56.688915] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:27:33.660 [2024-11-26 19:28:56.688964] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892162 ] 00:27:33.660 [2024-11-26 19:28:56.762662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.920 [2024-11-26 19:28:56.803240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.920 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:33.920 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:33.920 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:33.920 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:33.920 19:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:34.179 19:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:34.179 19:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:34.437 nvme0n1 00:27:34.437 19:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:34.437 19:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:34.696 Running I/O for 2 seconds... 
00:27:36.570 28405.00 IOPS, 110.96 MiB/s [2024-11-26T18:28:59.684Z] 28521.00 IOPS, 111.41 MiB/s 00:27:36.570 Latency(us) 00:27:36.570 [2024-11-26T18:28:59.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.570 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:36.570 nvme0n1 : 2.00 28537.33 111.47 0.00 0.00 4479.85 1771.03 10360.93 00:27:36.570 [2024-11-26T18:28:59.684Z] =================================================================================================================== 00:27:36.570 [2024-11-26T18:28:59.684Z] Total : 28537.33 111.47 0.00 0.00 4479.85 1771.03 10360.93 00:27:36.570 { 00:27:36.570 "results": [ 00:27:36.570 { 00:27:36.570 "job": "nvme0n1", 00:27:36.570 "core_mask": "0x2", 00:27:36.570 "workload": "randwrite", 00:27:36.570 "status": "finished", 00:27:36.570 "queue_depth": 128, 00:27:36.570 "io_size": 4096, 00:27:36.570 "runtime": 2.004953, 00:27:36.570 "iops": 28537.327308919463, 00:27:36.570 "mibps": 111.47393480046665, 00:27:36.570 "io_failed": 0, 00:27:36.570 "io_timeout": 0, 00:27:36.570 "avg_latency_us": 4479.854914243103, 00:27:36.570 "min_latency_us": 1771.032380952381, 00:27:36.570 "max_latency_us": 10360.929523809524 00:27:36.570 } 00:27:36.570 ], 00:27:36.570 "core_count": 1 00:27:36.570 } 00:27:36.570 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:36.570 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:36.570 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:36.570 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:36.570 | select(.opcode=="crc32c") 00:27:36.570 | "\(.module_name) \(.executed)"' 00:27:36.570 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:36.829 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:36.829 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:36.829 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:36.829 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:36.829 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3892162 00:27:36.829 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3892162 ']' 00:27:36.829 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3892162 00:27:36.829 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:36.829 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.829 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3892162 00:27:36.829 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:36.829 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:27:36.829 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3892162' 00:27:36.829 killing process with pid 3892162 00:27:36.829 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3892162 00:27:36.829 Received shutdown signal, test time was about 2.000000 seconds 00:27:36.829 00:27:36.829 Latency(us) 00:27:36.829 [2024-11-26T18:28:59.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.829 [2024-11-26T18:28:59.943Z] =================================================================================================================== 00:27:36.829 [2024-11-26T18:28:59.943Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:36.829 19:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3892162 00:27:37.088 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:37.088 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:37.088 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:37.088 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:37.088 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:37.088 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:37.088 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:37.088 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:37.088 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3892731 00:27:37.088 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3892731 /var/tmp/bperf.sock 00:27:37.088 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3892731 ']' 00:27:37.088 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:37.088 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:37.088 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:37.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:37.088 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:37.088 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:37.088 [2024-11-26 19:29:00.069742] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:27:37.088 [2024-11-26 19:29:00.069793] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892731 ] 00:27:37.088 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:37.088 Zero copy mechanism will not be used. 00:27:37.088 [2024-11-26 19:29:00.131172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.088 [2024-11-26 19:29:00.171082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.347 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:37.347 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:37.347 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:37.347 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:37.347 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:37.607 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:37.607 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:37.866 nvme0n1 00:27:37.866 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:37.866 19:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:37.866 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:37.866 Zero copy mechanism will not be used. 00:27:37.866 Running I/O for 2 seconds... 
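Each of these timed runs is followed by the same module check: the harness reads bdevperf's accel statistics and asserts that the crc32c work was executed by the expected module (software here, since DSA scanning is disabled in this configuration). The check condenses to the query below; the rpc.py path, socket, and jq filter are exactly those traced above.

# Which accel module handled crc32c, and how many operations did it execute?
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# Expected output here: "software <count>", satisfying the [[ software == software ]] assertion.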
00:27:40.181 6428.00 IOPS, 803.50 MiB/s [2024-11-26T18:29:03.295Z] 6319.00 IOPS, 789.88 MiB/s 00:27:40.181 Latency(us) 00:27:40.181 [2024-11-26T18:29:03.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.181 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:40.181 nvme0n1 : 2.00 6317.44 789.68 0.00 0.00 2528.60 1685.21 6959.30 00:27:40.181 [2024-11-26T18:29:03.295Z] =================================================================================================================== 00:27:40.181 [2024-11-26T18:29:03.295Z] Total : 6317.44 789.68 0.00 0.00 2528.60 1685.21 6959.30 00:27:40.181 { 00:27:40.181 "results": [ 00:27:40.181 { 00:27:40.181 "job": "nvme0n1", 00:27:40.181 "core_mask": "0x2", 00:27:40.181 "workload": "randwrite", 00:27:40.181 "status": "finished", 00:27:40.181 "queue_depth": 16, 00:27:40.181 "io_size": 131072, 00:27:40.181 "runtime": 2.002868, 00:27:40.181 "iops": 6317.440789907273, 00:27:40.181 "mibps": 789.6800987384091, 00:27:40.181 "io_failed": 0, 00:27:40.181 "io_timeout": 0, 00:27:40.181 "avg_latency_us": 2528.598073259494, 00:27:40.181 "min_latency_us": 1685.2114285714285, 00:27:40.181 "max_latency_us": 6959.299047619048 00:27:40.181 } 00:27:40.181 ], 00:27:40.181 "core_count": 1 00:27:40.181 } 00:27:40.181 19:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:40.181 19:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:40.181 19:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:40.181 19:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:40.181 | select(.opcode=="crc32c") 00:27:40.181 | "\(.module_name) \(.executed)"' 00:27:40.181 19:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:40.181 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:40.181 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:40.181 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:40.181 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:40.181 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3892731 00:27:40.181 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3892731 ']' 00:27:40.181 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3892731 00:27:40.181 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:40.181 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:40.181 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3892731 00:27:40.181 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:40.181 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:27:40.181 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3892731' 00:27:40.181 killing process with pid 3892731 00:27:40.181 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3892731 00:27:40.181 Received shutdown signal, test time was about 2.000000 seconds 00:27:40.181 00:27:40.182 Latency(us) 00:27:40.182 [2024-11-26T18:29:03.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.182 [2024-11-26T18:29:03.296Z] =================================================================================================================== 00:27:40.182 [2024-11-26T18:29:03.296Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:40.182 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3892731 00:27:40.441 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3890960 00:27:40.441 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3890960 ']' 00:27:40.441 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3890960 00:27:40.441 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:40.441 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:40.441 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3890960 00:27:40.441 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:40.441 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:40.441 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3890960' 00:27:40.441 killing process with pid 3890960 00:27:40.441 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3890960 00:27:40.441 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3890960 00:27:40.441 00:27:40.441 real 0m14.664s 00:27:40.441 user 0m27.691s 00:27:40.441 sys 0m4.434s 00:27:40.441 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:40.441 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:40.441 ************************************ 00:27:40.441 END TEST nvmf_digest_clean 00:27:40.441 ************************************ 00:27:40.699 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:40.699 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:40.699 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:40.699 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:40.699 ************************************ 00:27:40.699 START TEST nvmf_digest_error 00:27:40.699 ************************************ 00:27:40.699 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:27:40.699 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:40.699 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:40.699 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:40.699 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:40.699 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3893343 00:27:40.699 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3893343 00:27:40.699 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:40.699 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3893343 ']' 00:27:40.699 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.699 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:40.699 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.699 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:40.699 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:40.699 [2024-11-26 19:29:03.679867] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:27:40.699 [2024-11-26 19:29:03.679913] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.699 [2024-11-26 19:29:03.758602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.699 [2024-11-26 19:29:03.795187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.699 [2024-11-26 19:29:03.795220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.699 [2024-11-26 19:29:03.795227] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.699 [2024-11-26 19:29:03.795233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:40.699 [2024-11-26 19:29:03.795237] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:40.699 [2024-11-26 19:29:03.795797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:40.958 [2024-11-26 19:29:03.872247] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:40.958 null0 00:27:40.958 [2024-11-26 19:29:03.968644] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.958 [2024-11-26 19:29:03.992856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3893369 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3893369 /var/tmp/bperf.sock 00:27:40.958 19:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:40.958 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3893369 ']' 
00:27:40.958 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:40.958 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:40.958 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:40.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:40.958 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:40.958 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:40.958 [2024-11-26 19:29:04.047301] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:27:40.958 [2024-11-26 19:29:04.047339] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3893369 ] 00:27:41.219 [2024-11-26 19:29:04.121715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.219 [2024-11-26 19:29:04.161895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.219 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:41.219 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:41.219 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:41.219 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:41.478 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:41.478 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.478 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:41.478 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.478 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:41.478 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:41.749 nvme0n1 00:27:41.749 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:41.749 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.749 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
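The error-path test differs from the clean one mainly in how crc32c is wired up: the target's accel layer is pointed at the error-injection module before the digest-enabled controller is attached, so corrupted digests can be produced on demand. The RPC sequence traced around this point condenses to roughly the following; rpc_cmd, bperf_rpc, and bperf_py are the digest.sh helpers seen in the trace, talking to the nvmf target and to bdevperf respectively, and all arguments are copied from the trace.

# Target side (rpc_cmd -> nvmf_tgt): route crc32c through the error-injection accel module.
rpc_cmd accel_assign_opc -o crc32c -m error
# bdevperf side (bperf_rpc -> /var/tmp/bperf.sock): enable NVMe error counters and set the
# bdev retry count as traced; injection starts disabled on the target.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc_cmd accel_error_inject_error -o crc32c -t disable
# Attach the data-digest controller, arm crc32c corruption on the target (arguments exactly
# as traced), and run I/O; the host is expected to log "data digest error" and transient
# transport completions, as seen in the records that follow.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
bperf_py perform_tests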
00:27:41.749 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.749 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:41.749 19:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:42.008 Running I/O for 2 seconds... 00:27:42.008 [2024-11-26 19:29:04.904089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.008 [2024-11-26 19:29:04.904124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.008 [2024-11-26 19:29:04.904134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.008 [2024-11-26 19:29:04.914411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.008 [2024-11-26 19:29:04.914434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.008 [2024-11-26 19:29:04.914443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.008 [2024-11-26 19:29:04.925439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.008 [2024-11-26 19:29:04.925460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.008 [2024-11-26 19:29:04.925469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.008 [2024-11-26 19:29:04.934263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.008 [2024-11-26 19:29:04.934284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.008 [2024-11-26 19:29:04.934292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.008 [2024-11-26 19:29:04.945866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.008 [2024-11-26 19:29:04.945887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.008 [2024-11-26 19:29:04.945896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.008 [2024-11-26 19:29:04.957600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.008 [2024-11-26 19:29:04.957621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.008 [2024-11-26 19:29:04.957630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.008 [2024-11-26 19:29:04.969649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.008 [2024-11-26 19:29:04.969675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.008 [2024-11-26 19:29:04.969684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.008 [2024-11-26 19:29:04.980646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.008 [2024-11-26 19:29:04.980666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.008 [2024-11-26 19:29:04.980683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.008 [2024-11-26 19:29:04.990874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.008 [2024-11-26 19:29:04.990893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.008 [2024-11-26 19:29:04.990901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.008 [2024-11-26 19:29:05.000759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.008 [2024-11-26 19:29:05.000779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.008 [2024-11-26 19:29:05.000787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.008 [2024-11-26 19:29:05.009630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.008 [2024-11-26 19:29:05.009650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.008 [2024-11-26 19:29:05.009658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.008 [2024-11-26 19:29:05.021131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.008 [2024-11-26 19:29:05.021151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.008 [2024-11-26 19:29:05.021158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.008 [2024-11-26 19:29:05.029055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.008 [2024-11-26 19:29:05.029074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.008 [2024-11-26 19:29:05.029081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.008 [2024-11-26 19:29:05.040220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.008 [2024-11-26 19:29:05.040240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.008 [2024-11-26 19:29:05.040248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.008 [2024-11-26 19:29:05.052794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.008 [2024-11-26 19:29:05.052814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.008 [2024-11-26 19:29:05.052822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.008 [2024-11-26 19:29:05.064961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.009 [2024-11-26 19:29:05.064981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.009 [2024-11-26 19:29:05.064989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.009 [2024-11-26 19:29:05.074100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.009 [2024-11-26 19:29:05.074122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.009 [2024-11-26 19:29:05.074130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.009 [2024-11-26 19:29:05.082249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.009 [2024-11-26 19:29:05.082269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.009 [2024-11-26 19:29:05.082277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.009 [2024-11-26 19:29:05.092640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.009 [2024-11-26 19:29:05.092659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.009 [2024-11-26 19:29:05.092667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.009 [2024-11-26 19:29:05.101987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.009 [2024-11-26 19:29:05.102007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.009 [2024-11-26 19:29:05.102015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.009 [2024-11-26 19:29:05.111845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.009 [2024-11-26 19:29:05.111865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.009 [2024-11-26 19:29:05.111873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.269 [2024-11-26 19:29:05.121680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.269 [2024-11-26 19:29:05.121699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.269 [2024-11-26 19:29:05.121707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.269 [2024-11-26 19:29:05.130280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.269 [2024-11-26 19:29:05.130300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.269 [2024-11-26 19:29:05.130307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.269 [2024-11-26 19:29:05.140483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.269 [2024-11-26 19:29:05.140503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.269 [2024-11-26 19:29:05.140510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.269 [2024-11-26 19:29:05.149010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.269 [2024-11-26 19:29:05.149029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.269 [2024-11-26 19:29:05.149037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.269 [2024-11-26 19:29:05.158177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.269 [2024-11-26 19:29:05.158197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.269 [2024-11-26 19:29:05.158204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.269 [2024-11-26 19:29:05.168615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.269 [2024-11-26 19:29:05.168633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:42.269 [2024-11-26 19:29:05.168641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.269 [2024-11-26 19:29:05.177254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.269 [2024-11-26 19:29:05.177274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.269 [2024-11-26 19:29:05.177281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.269 [2024-11-26 19:29:05.186594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.269 [2024-11-26 19:29:05.186613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.269 [2024-11-26 19:29:05.186621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.270 [2024-11-26 19:29:05.197238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.270 [2024-11-26 19:29:05.197258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.270 [2024-11-26 19:29:05.197265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.270 [2024-11-26 19:29:05.205222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.270 [2024-11-26 19:29:05.205241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.270 [2024-11-26 19:29:05.205249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.270 [2024-11-26 19:29:05.216297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.270 [2024-11-26 19:29:05.216318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.270 [2024-11-26 19:29:05.216325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.270 [2024-11-26 19:29:05.228023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.270 [2024-11-26 19:29:05.228043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.270 [2024-11-26 19:29:05.228051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.270 [2024-11-26 19:29:05.237846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.270 [2024-11-26 19:29:05.237865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23899 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.270 [2024-11-26 19:29:05.237876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.270 [2024-11-26 19:29:05.247201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.270 [2024-11-26 19:29:05.247220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.270 [2024-11-26 19:29:05.247228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.270 [2024-11-26 19:29:05.255997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.270 [2024-11-26 19:29:05.256016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.270 [2024-11-26 19:29:05.256024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.270 [2024-11-26 19:29:05.265475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.270 [2024-11-26 19:29:05.265495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.270 [2024-11-26 19:29:05.265503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.270 [2024-11-26 19:29:05.275444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.270 [2024-11-26 19:29:05.275463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.270 [2024-11-26 19:29:05.275471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.270 [2024-11-26 19:29:05.283993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.270 [2024-11-26 19:29:05.284012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.270 [2024-11-26 19:29:05.284021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.270 [2024-11-26 19:29:05.293324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.270 [2024-11-26 19:29:05.293342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.270 [2024-11-26 19:29:05.293350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.270 [2024-11-26 19:29:05.305201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.270 [2024-11-26 19:29:05.305221] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.270 [2024-11-26 19:29:05.305229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.270 [2024-11-26 19:29:05.313642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.270 [2024-11-26 19:29:05.313662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.270 [2024-11-26 19:29:05.313675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.270 [2024-11-26 19:29:05.326719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.270 [2024-11-26 19:29:05.326745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.270 [2024-11-26 19:29:05.326753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.270 [2024-11-26 19:29:05.334872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.270 [2024-11-26 19:29:05.334891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.270 [2024-11-26 19:29:05.334899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.270 [2024-11-26 19:29:05.346935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.270 [2024-11-26 19:29:05.346955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.270 [2024-11-26 19:29:05.346962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.270 [2024-11-26 19:29:05.357071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.270 [2024-11-26 19:29:05.357091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.270 [2024-11-26 19:29:05.357099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.270 [2024-11-26 19:29:05.365843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.270 [2024-11-26 19:29:05.365862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.270 [2024-11-26 19:29:05.365870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.270 [2024-11-26 19:29:05.378424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.270 [2024-11-26 19:29:05.378443] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.270 [2024-11-26 19:29:05.378451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.530 [2024-11-26 19:29:05.390265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.530 [2024-11-26 19:29:05.390286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.530 [2024-11-26 19:29:05.390293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.530 [2024-11-26 19:29:05.402566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.530 [2024-11-26 19:29:05.402586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.530 [2024-11-26 19:29:05.402594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.530 [2024-11-26 19:29:05.414044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.530 [2024-11-26 19:29:05.414065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.530 [2024-11-26 19:29:05.414073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.530 [2024-11-26 19:29:05.427506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.530 [2024-11-26 19:29:05.427536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.530 [2024-11-26 19:29:05.427544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.530 [2024-11-26 19:29:05.435876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.530 [2024-11-26 19:29:05.435895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.530 [2024-11-26 19:29:05.435903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.530 [2024-11-26 19:29:05.446146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.530 [2024-11-26 19:29:05.446164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.530 [2024-11-26 19:29:05.446172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.530 [2024-11-26 19:29:05.455882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf1d6b0) 00:27:42.530 [2024-11-26 19:29:05.455902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.530 [2024-11-26 19:29:05.455909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.530 [2024-11-26 19:29:05.464578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.530 [2024-11-26 19:29:05.464597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.530 [2024-11-26 19:29:05.464604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.530 [2024-11-26 19:29:05.473858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.530 [2024-11-26 19:29:05.473877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.530 [2024-11-26 19:29:05.473885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.530 [2024-11-26 19:29:05.482987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.530 [2024-11-26 19:29:05.483006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.530 [2024-11-26 19:29:05.483014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.530 [2024-11-26 19:29:05.492172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.530 [2024-11-26 19:29:05.492191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.530 [2024-11-26 19:29:05.492199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.530 [2024-11-26 19:29:05.501704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.530 [2024-11-26 19:29:05.501724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.530 [2024-11-26 19:29:05.501735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.530 [2024-11-26 19:29:05.511018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.531 [2024-11-26 19:29:05.511038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.531 [2024-11-26 19:29:05.511045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.531 [2024-11-26 19:29:05.520816] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.531 [2024-11-26 19:29:05.520835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.531 [2024-11-26 19:29:05.520843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.531 [2024-11-26 19:29:05.528892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.531 [2024-11-26 19:29:05.528911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.531 [2024-11-26 19:29:05.528919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.531 [2024-11-26 19:29:05.541117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.531 [2024-11-26 19:29:05.541136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.531 [2024-11-26 19:29:05.541144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.531 [2024-11-26 19:29:05.552622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.531 [2024-11-26 19:29:05.552643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.531 [2024-11-26 19:29:05.552651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.531 [2024-11-26 19:29:05.560831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.531 [2024-11-26 19:29:05.560851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.531 [2024-11-26 19:29:05.560859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.531 [2024-11-26 19:29:05.572120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.531 [2024-11-26 19:29:05.572139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.531 [2024-11-26 19:29:05.572147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.531 [2024-11-26 19:29:05.581248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.531 [2024-11-26 19:29:05.581267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.531 [2024-11-26 19:29:05.581275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:42.531 [2024-11-26 19:29:05.592589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.531 [2024-11-26 19:29:05.592609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.531 [2024-11-26 19:29:05.592617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.531 [2024-11-26 19:29:05.601552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.531 [2024-11-26 19:29:05.601572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.531 [2024-11-26 19:29:05.601580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.531 [2024-11-26 19:29:05.613158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.531 [2024-11-26 19:29:05.613178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.531 [2024-11-26 19:29:05.613186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.531 [2024-11-26 19:29:05.621814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.531 [2024-11-26 19:29:05.621833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.531 [2024-11-26 19:29:05.621840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.531 [2024-11-26 19:29:05.632828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.531 [2024-11-26 19:29:05.632848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.531 [2024-11-26 19:29:05.632856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.791 [2024-11-26 19:29:05.644087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.791 [2024-11-26 19:29:05.644108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.791 [2024-11-26 19:29:05.644116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.791 [2024-11-26 19:29:05.653494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.791 [2024-11-26 19:29:05.653514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.791 [2024-11-26 19:29:05.653522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.791 [2024-11-26 19:29:05.664305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.791 [2024-11-26 19:29:05.664325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.791 [2024-11-26 19:29:05.664333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.791 [2024-11-26 19:29:05.673791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.791 [2024-11-26 19:29:05.673811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.791 [2024-11-26 19:29:05.673823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.791 [2024-11-26 19:29:05.683230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.791 [2024-11-26 19:29:05.683249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.791 [2024-11-26 19:29:05.683257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.791 [2024-11-26 19:29:05.693396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.791 [2024-11-26 19:29:05.693416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.791 [2024-11-26 19:29:05.693424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.791 [2024-11-26 19:29:05.701787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.791 [2024-11-26 19:29:05.701806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.791 [2024-11-26 19:29:05.701814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.791 [2024-11-26 19:29:05.713096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.791 [2024-11-26 19:29:05.713117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.791 [2024-11-26 19:29:05.713125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.791 [2024-11-26 19:29:05.723962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.791 [2024-11-26 19:29:05.723983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.791 [2024-11-26 19:29:05.723991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.791 [2024-11-26 19:29:05.731798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.791 [2024-11-26 19:29:05.731819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.791 [2024-11-26 19:29:05.731827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.791 [2024-11-26 19:29:05.741785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.791 [2024-11-26 19:29:05.741806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.791 [2024-11-26 19:29:05.741814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.791 [2024-11-26 19:29:05.751189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.791 [2024-11-26 19:29:05.751209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.791 [2024-11-26 19:29:05.751216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.791 [2024-11-26 19:29:05.761115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.791 [2024-11-26 19:29:05.761139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.791 [2024-11-26 19:29:05.761147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.791 [2024-11-26 19:29:05.769930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.791 [2024-11-26 19:29:05.769950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.791 [2024-11-26 19:29:05.769958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.791 [2024-11-26 19:29:05.780842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.792 [2024-11-26 19:29:05.780862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.792 [2024-11-26 19:29:05.780870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.792 [2024-11-26 19:29:05.792297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.792 [2024-11-26 19:29:05.792318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.792 
[2024-11-26 19:29:05.792328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.792 [2024-11-26 19:29:05.801638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.792 [2024-11-26 19:29:05.801658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.792 [2024-11-26 19:29:05.801666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.792 [2024-11-26 19:29:05.812606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.792 [2024-11-26 19:29:05.812626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.792 [2024-11-26 19:29:05.812634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.792 [2024-11-26 19:29:05.822525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.792 [2024-11-26 19:29:05.822545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.792 [2024-11-26 19:29:05.822553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.792 [2024-11-26 19:29:05.830976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.792 [2024-11-26 19:29:05.830996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.792 [2024-11-26 19:29:05.831005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.792 [2024-11-26 19:29:05.841098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.792 [2024-11-26 19:29:05.841118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.792 [2024-11-26 19:29:05.841125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.792 [2024-11-26 19:29:05.851411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.792 [2024-11-26 19:29:05.851431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.792 [2024-11-26 19:29:05.851438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.792 [2024-11-26 19:29:05.860543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.792 [2024-11-26 19:29:05.860563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18691 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:42.792 [2024-11-26 19:29:05.860571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.792 [2024-11-26 19:29:05.871316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.792 [2024-11-26 19:29:05.871336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.792 [2024-11-26 19:29:05.871344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.792 [2024-11-26 19:29:05.882805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.792 [2024-11-26 19:29:05.882825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.792 [2024-11-26 19:29:05.882832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.792 25151.00 IOPS, 98.25 MiB/s [2024-11-26T18:29:05.906Z] [2024-11-26 19:29:05.895347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:42.792 [2024-11-26 19:29:05.895367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.792 [2024-11-26 19:29:05.895375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:05.903907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:05.903928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:05.903936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:05.916435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:05.916456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:05.916465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:05.924898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:05.924917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:05.924924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:05.937148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:05.937168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:05.937180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:05.948779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:05.948799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:05.948806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:05.957381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:05.957401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:05.957408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:05.966381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:05.966401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:05.966409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:05.978305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:05.978325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:05.978333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:05.990376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:05.990396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:05.990404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:06.002396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:06.002416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:06.002423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:06.011577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 
[2024-11-26 19:29:06.011597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:06.011605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:06.020023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:06.020042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:06.020050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:06.029093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:06.029113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:06.029121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:06.041293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:06.041312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:06.041320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:06.049161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:06.049181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:06.049188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:06.060555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:06.060574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:06.060582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:06.072643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:06.072662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:06.072674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:06.085260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:06.085280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:06.085288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:06.096166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:06.096185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:06.096193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:06.105194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:06.105214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:06.105221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:06.115596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:06.115615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:06.115626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:06.127068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:06.127087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-11-26 19:29:06.127095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-11-26 19:29:06.135459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.052 [2024-11-26 19:29:06.135478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-11-26 19:29:06.135485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.053 [2024-11-26 19:29:06.148474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.053 [2024-11-26 19:29:06.148493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-11-26 19:29:06.148501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.053 [2024-11-26 19:29:06.160367] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.053 [2024-11-26 19:29:06.160387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-11-26 19:29:06.160395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.171142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.171161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.171170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.181183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.181202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.181210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.189520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.189539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.189547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.200393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.200413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.200421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.208409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.208431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.208440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.220290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.220310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.220317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
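Each completion line carries the raw NVMe status fields: the "(00/22)" pair is Status Code Type 00h (generic command status) with Status Code 22h, which the NVMe specification names Command Transient Transport Error; qid/cid identify the queue and command, sqhd is the reported submission queue head, and p, m and dnr are the phase tag, more and do-not-retry bits. In the command prints, lba and len give the read range in blocks (len:1 for the 4096-byte reads of this pass, apparently len:32 later for the 128 KiB pass). Purely as an illustration, and again assuming the console output were saved to a file, a short awk pass can list which cid/lba pairs were affected:

    # Illustration only (the log file name is an assumption): extract the
    # cid/lba fields of the READs printed alongside the digest errors.
    awk '/nvme_io_qpair_print_command/ {
            for (i = 1; i <= NF; i++)
                if ($i ~ /^(cid|lba):/) printf "%s ", $i
            print ""
        }' bperf-console.log | sort | uniq -c | sort -rn | head
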
00:27:43.313 [2024-11-26 19:29:06.231842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.231862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.231869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.240838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.240858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.240867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.249634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.249655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.249662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.260833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.260852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.260860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.269482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.269502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.269510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.282410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.282429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.282438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.294314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.294334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.294342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.305341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.305361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.305368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.317799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.317819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.317827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.326043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.326063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.326071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.337548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.337567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.337575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.347889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.347909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.347918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.356651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.356676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.356685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.368364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.368383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.368391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.376803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.376822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.376829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.313 [2024-11-26 19:29:06.389471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.313 [2024-11-26 19:29:06.389491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.313 [2024-11-26 19:29:06.389502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.314 [2024-11-26 19:29:06.399878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.314 [2024-11-26 19:29:06.399898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.314 [2024-11-26 19:29:06.399905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.314 [2024-11-26 19:29:06.407152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.314 [2024-11-26 19:29:06.407171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.314 [2024-11-26 19:29:06.407179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.314 [2024-11-26 19:29:06.418994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.314 [2024-11-26 19:29:06.419013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.314 [2024-11-26 19:29:06.419020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.431111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.431130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.431138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.443048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.443068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.443075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.455262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.455282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.455289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.465304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.465323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.465331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.473789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.473808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.473816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.482835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.482858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.482866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.492760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.492781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.492788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.502319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.502338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.502346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.510576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.510594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 
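None of this has to be scraped from the console, though. The controller is attached through bdev_nvme with NVMe error statistics enabled (the setup for the next bdevperf instance further down shows the corresponding bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 call), so every 00/22 completion increments a per-status-code counter that bdev_get_iostat exposes under driver_specific.nvme_error. That is what host/digest.sh's get_transient_errcount reads once the run below finishes, and the traced check (( 196 > 0 )) shows 196 such completions were counted for this pass. A minimal sketch of that lookup, reusing the rpc.py path, socket and jq filter exactly as they appear in the trace (the wrapper body itself is a paraphrase, not the script's verbatim code):

    # Sketch of the error-count lookup used after the run; paths, socket and
    # jq filter are taken from the trace, the function body is paraphrased.
    get_transient_errcount() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error
                 | .status_code | .command_transient_transport_error'
    }
    (( $(get_transient_errcount nvme0n1) > 0 ))   # the check digest.sh performs
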
[2024-11-26 19:29:06.510602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.521229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.521249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.521257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.533404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.533423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.533431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.544754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.544773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.544781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.552969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.552989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.552997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.565276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.565299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.565308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.575073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.575094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.575102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.583762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.583782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3864 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.583790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.595217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.595237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.595246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.603767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.603786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.603793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.614486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.614506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.614514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.624711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.624731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.624739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.632995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.633015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.633023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.642308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.642329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.642336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.652150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.652170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:35 nsid:1 lba:17501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.621 [2024-11-26 19:29:06.652183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.621 [2024-11-26 19:29:06.661230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.621 [2024-11-26 19:29:06.661250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.622 [2024-11-26 19:29:06.661258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.622 [2024-11-26 19:29:06.670974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.622 [2024-11-26 19:29:06.670994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.622 [2024-11-26 19:29:06.671001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.622 [2024-11-26 19:29:06.680597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.622 [2024-11-26 19:29:06.680616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.622 [2024-11-26 19:29:06.680624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.622 [2024-11-26 19:29:06.690958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.622 [2024-11-26 19:29:06.690978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.622 [2024-11-26 19:29:06.690986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.622 [2024-11-26 19:29:06.700784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.622 [2024-11-26 19:29:06.700804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.622 [2024-11-26 19:29:06.700812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.622 [2024-11-26 19:29:06.711596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.622 [2024-11-26 19:29:06.711615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.622 [2024-11-26 19:29:06.711624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.622 [2024-11-26 19:29:06.723936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.622 [2024-11-26 19:29:06.723954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.622 [2024-11-26 19:29:06.723962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.882 [2024-11-26 19:29:06.736872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.882 [2024-11-26 19:29:06.736892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.882 [2024-11-26 19:29:06.736899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.882 [2024-11-26 19:29:06.745901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.882 [2024-11-26 19:29:06.745920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.882 [2024-11-26 19:29:06.745929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.882 [2024-11-26 19:29:06.756793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.882 [2024-11-26 19:29:06.756814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.882 [2024-11-26 19:29:06.756821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.882 [2024-11-26 19:29:06.766237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.882 [2024-11-26 19:29:06.766256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.882 [2024-11-26 19:29:06.766265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.882 [2024-11-26 19:29:06.774449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.882 [2024-11-26 19:29:06.774469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.882 [2024-11-26 19:29:06.774477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.882 [2024-11-26 19:29:06.784425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.882 [2024-11-26 19:29:06.784445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.882 [2024-11-26 19:29:06.784452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.882 [2024-11-26 19:29:06.793656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 
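The pass that produced the errors above ends just below with bdevperf's usual two-part report: a short live line (24999.00 IOPS, 97.65 MiB/s), then a fixed-width summary whose columns are runtime in seconds, IOPS, MiB/s, failed and timed-out I/O per second, and average/minimum/maximum latency in microseconds for the nvme0n1 job (core mask 0x2, randread, queue depth 128, 4096-byte I/O), and finally the same numbers as a JSON "results" object. If that JSON block were captured to a file, the headline figures could be pulled out with jq; the file name below is an assumption, and the field names are copied from the JSON that follows:

    # Hypothetical post-processing of the JSON results printed below
    # (field names taken from that block; the file name is assumed).
    jq -r '.results[0] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' \
        bperf-results.json
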
00:27:43.882 [2024-11-26 19:29:06.793681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.882 [2024-11-26 19:29:06.793689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.882 [2024-11-26 19:29:06.803060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.882 [2024-11-26 19:29:06.803079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.882 [2024-11-26 19:29:06.803087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.882 [2024-11-26 19:29:06.812117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.882 [2024-11-26 19:29:06.812137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.882 [2024-11-26 19:29:06.812144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.882 [2024-11-26 19:29:06.821417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.882 [2024-11-26 19:29:06.821436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.882 [2024-11-26 19:29:06.821447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.882 [2024-11-26 19:29:06.829951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.882 [2024-11-26 19:29:06.829971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.882 [2024-11-26 19:29:06.829979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.882 [2024-11-26 19:29:06.840294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.882 [2024-11-26 19:29:06.840314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.882 [2024-11-26 19:29:06.840322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.882 [2024-11-26 19:29:06.850732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.882 [2024-11-26 19:29:06.850751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.882 [2024-11-26 19:29:06.850759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.882 [2024-11-26 19:29:06.859392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xf1d6b0) 00:27:43.882 [2024-11-26 19:29:06.859411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.882 [2024-11-26 19:29:06.859418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.882 [2024-11-26 19:29:06.869578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.882 [2024-11-26 19:29:06.869598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.882 [2024-11-26 19:29:06.869606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.882 [2024-11-26 19:29:06.880601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.882 [2024-11-26 19:29:06.880621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.882 [2024-11-26 19:29:06.880628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.882 [2024-11-26 19:29:06.889245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf1d6b0) 00:27:43.882 [2024-11-26 19:29:06.889264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.882 [2024-11-26 19:29:06.889272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.882 24999.00 IOPS, 97.65 MiB/s 00:27:43.882 Latency(us) 00:27:43.882 [2024-11-26T18:29:06.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.882 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:43.882 nvme0n1 : 2.01 25019.36 97.73 0.00 0.00 5109.56 2637.04 17601.10 00:27:43.882 [2024-11-26T18:29:06.996Z] =================================================================================================================== 00:27:43.882 [2024-11-26T18:29:06.996Z] Total : 25019.36 97.73 0.00 0.00 5109.56 2637.04 17601.10 00:27:43.882 { 00:27:43.882 "results": [ 00:27:43.882 { 00:27:43.882 "job": "nvme0n1", 00:27:43.882 "core_mask": "0x2", 00:27:43.882 "workload": "randread", 00:27:43.882 "status": "finished", 00:27:43.882 "queue_depth": 128, 00:27:43.882 "io_size": 4096, 00:27:43.882 "runtime": 2.005247, 00:27:43.883 "iops": 25019.361704568066, 00:27:43.883 "mibps": 97.731881658469, 00:27:43.883 "io_failed": 0, 00:27:43.883 "io_timeout": 0, 00:27:43.883 "avg_latency_us": 5109.564155395465, 00:27:43.883 "min_latency_us": 2637.0438095238096, 00:27:43.883 "max_latency_us": 17601.097142857143 00:27:43.883 } 00:27:43.883 ], 00:27:43.883 "core_count": 1 00:27:43.883 } 00:27:43.883 19:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:43.883 19:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:43.883 19:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:43.883 19:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:43.883 | .driver_specific 00:27:43.883 | .nvme_error 00:27:43.883 | .status_code 00:27:43.883 | .command_transient_transport_error' 00:27:44.142 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 196 > 0 )) 00:27:44.142 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3893369 00:27:44.142 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3893369 ']' 00:27:44.142 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3893369 00:27:44.142 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:44.142 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:44.142 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3893369 00:27:44.142 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:44.142 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:44.142 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3893369' 00:27:44.142 killing process with pid 3893369 00:27:44.142 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3893369 00:27:44.142 Received shutdown signal, test time was about 2.000000 seconds 00:27:44.142 00:27:44.142 Latency(us) 00:27:44.142 [2024-11-26T18:29:07.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.142 [2024-11-26T18:29:07.256Z] =================================================================================================================== 00:27:44.142 [2024-11-26T18:29:07.256Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:44.142 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3893369 00:27:44.401 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:44.401 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:44.401 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:44.401 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:44.401 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:44.401 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3894015 00:27:44.401 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3894015 /var/tmp/bperf.sock 00:27:44.401 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:44.401 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@835 -- # '[' -z 3894015 ']' 00:27:44.401 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:44.401 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:44.401 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:44.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:44.401 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:44.401 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:44.401 [2024-11-26 19:29:07.375477] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:27:44.401 [2024-11-26 19:29:07.375525] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894015 ] 00:27:44.401 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:44.401 Zero copy mechanism will not be used. 00:27:44.401 [2024-11-26 19:29:07.449565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.401 [2024-11-26 19:29:07.491270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.660 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:44.660 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:44.660 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:44.660 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:44.918 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:44.918 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.918 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:44.918 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.918 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:44.918 19:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:45.178 nvme0n1 00:27:45.178 19:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:45.178 19:29:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.178 19:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.178 19:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.178 19:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:45.178 19:29:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:45.178 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:45.178 Zero copy mechanism will not be used. 00:27:45.178 Running I/O for 2 seconds... 00:27:45.178 [2024-11-26 19:29:08.183655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.178 [2024-11-26 19:29:08.183699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.178 [2024-11-26 19:29:08.183710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.178 [2024-11-26 19:29:08.189311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.178 [2024-11-26 19:29:08.189335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.178 [2024-11-26 19:29:08.189343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.178 [2024-11-26 19:29:08.194951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.178 [2024-11-26 19:29:08.194973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.178 [2024-11-26 19:29:08.194981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.178 [2024-11-26 19:29:08.200157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.178 [2024-11-26 19:29:08.200177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.178 [2024-11-26 19:29:08.200185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.178 [2024-11-26 19:29:08.203173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.178 [2024-11-26 19:29:08.203192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.178 [2024-11-26 19:29:08.203200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.178 [2024-11-26 19:29:08.208779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x7b61a0) 00:27:45.178 [2024-11-26 19:29:08.208799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.178 [2024-11-26 19:29:08.208807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.178 [2024-11-26 19:29:08.214338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.178 [2024-11-26 19:29:08.214358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.178 [2024-11-26 19:29:08.214367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.178 [2024-11-26 19:29:08.219844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.178 [2024-11-26 19:29:08.219864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.178 [2024-11-26 19:29:08.219872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.178 [2024-11-26 19:29:08.225235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.178 [2024-11-26 19:29:08.225256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.178 [2024-11-26 19:29:08.225264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.178 [2024-11-26 19:29:08.230680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.178 [2024-11-26 19:29:08.230700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.178 [2024-11-26 19:29:08.230709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.178 [2024-11-26 19:29:08.236112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.178 [2024-11-26 19:29:08.236132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.178 [2024-11-26 19:29:08.236140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.178 [2024-11-26 19:29:08.241376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.178 [2024-11-26 19:29:08.241396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.178 [2024-11-26 19:29:08.241404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.178 [2024-11-26 19:29:08.246815] 
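By this point the log has switched to the second error-injection pass: the trace above shows the previous bdevperf process (pid 3893369) being killed and waited on, a new instance (pid 3894015) started with -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z, NVMe error statistics enabled, the crc32c error injection disabled and then re-armed with an interval of 32 (apparently corrupting every 32nd crc32c operation), the target attached with the TCP data digest turned on via --ddgst, and I/O driven through bdevperf.py perform_tests. The digest errors that follow, now against tqpair 0x7b61a0 and with len:32 reads, are the expected result. A condensed sketch of that RPC sequence, with the commands copied from the trace (bperf_rpc's expansion is visible in the trace; rpc_cmd is the suite's own RPC helper, whose socket is not shown in this excerpt):

    # Condensed from the traced digest.sh steps for this second pass.
    bperf_rpc() {   # expansion as shown in the trace
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock "$@"
    }
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable       # suite helper; socket not shown here
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32  # re-arm: corrupt with interval 32
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
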
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.178 [2024-11-26 19:29:08.246835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.178 [2024-11-26 19:29:08.246842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.178 [2024-11-26 19:29:08.252380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.178 [2024-11-26 19:29:08.252400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.178 [2024-11-26 19:29:08.252408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.178 [2024-11-26 19:29:08.257765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.179 [2024-11-26 19:29:08.257785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.179 [2024-11-26 19:29:08.257793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.179 [2024-11-26 19:29:08.263121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.179 [2024-11-26 19:29:08.263141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.179 [2024-11-26 19:29:08.263149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.179 [2024-11-26 19:29:08.268486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.179 [2024-11-26 19:29:08.268507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.179 [2024-11-26 19:29:08.268515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.179 [2024-11-26 19:29:08.274157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.179 [2024-11-26 19:29:08.274177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.179 [2024-11-26 19:29:08.274188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.179 [2024-11-26 19:29:08.279586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.179 [2024-11-26 19:29:08.279608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.179 [2024-11-26 19:29:08.279615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:27:45.179 [2024-11-26 19:29:08.284717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.179 [2024-11-26 19:29:08.284737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.179 [2024-11-26 19:29:08.284746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.179 [2024-11-26 19:29:08.290187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.179 [2024-11-26 19:29:08.290208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.179 [2024-11-26 19:29:08.290216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.440 [2024-11-26 19:29:08.295727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.440 [2024-11-26 19:29:08.295748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.440 [2024-11-26 19:29:08.295756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.440 [2024-11-26 19:29:08.302005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.440 [2024-11-26 19:29:08.302026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.440 [2024-11-26 19:29:08.302035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.440 [2024-11-26 19:29:08.310076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.440 [2024-11-26 19:29:08.310098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.440 [2024-11-26 19:29:08.310106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.440 [2024-11-26 19:29:08.317880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.440 [2024-11-26 19:29:08.317902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.440 [2024-11-26 19:29:08.317910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.440 [2024-11-26 19:29:08.326346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.440 [2024-11-26 19:29:08.326367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.440 [2024-11-26 19:29:08.326375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.440 [2024-11-26 19:29:08.334181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.440 [2024-11-26 19:29:08.334207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.440 [2024-11-26 19:29:08.334215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.440 [2024-11-26 19:29:08.342415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.440 [2024-11-26 19:29:08.342437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.440 [2024-11-26 19:29:08.342445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.440 [2024-11-26 19:29:08.350395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.440 [2024-11-26 19:29:08.350417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.440 [2024-11-26 19:29:08.350425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.440 [2024-11-26 19:29:08.358170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.440 [2024-11-26 19:29:08.358193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.440 [2024-11-26 19:29:08.358201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.440 [2024-11-26 19:29:08.366457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.440 [2024-11-26 19:29:08.366481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.440 [2024-11-26 19:29:08.366491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.440 [2024-11-26 19:29:08.374703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.440 [2024-11-26 19:29:08.374726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.440 [2024-11-26 19:29:08.374734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.440 [2024-11-26 19:29:08.382945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.440 [2024-11-26 19:29:08.382966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.440 [2024-11-26 19:29:08.382974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.440 [2024-11-26 19:29:08.390762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.440 [2024-11-26 19:29:08.390784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.440 [2024-11-26 19:29:08.390792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.440 [2024-11-26 19:29:08.398705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.440 [2024-11-26 19:29:08.398727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.440 [2024-11-26 19:29:08.398735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.440 [2024-11-26 19:29:08.407200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.440 [2024-11-26 19:29:08.407221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.440 [2024-11-26 19:29:08.407229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.440 [2024-11-26 19:29:08.414203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.440 [2024-11-26 19:29:08.414224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.414232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.421137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.421159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.421167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.427921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.427944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.427952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.434772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.434795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.434803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.440887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.440909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.440918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.446452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.446474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.446483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.451795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.451817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.451825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.457835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.457857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.457869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.465292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.465314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.465323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.473050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.473073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.473082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.479877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.479900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 
[2024-11-26 19:29:08.479909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.483420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.483442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.483449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.489590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.489611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.489619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.497064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.497086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.497094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.503696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.503716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.503724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.510498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.510520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.510528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.517913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.517936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.517944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.525416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.525438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12608 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.525446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.532800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.532822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.532830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.540397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.540419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.540428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.441 [2024-11-26 19:29:08.547677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.441 [2024-11-26 19:29:08.547699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.441 [2024-11-26 19:29:08.547708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.701 [2024-11-26 19:29:08.555021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.701 [2024-11-26 19:29:08.555042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.701 [2024-11-26 19:29:08.555049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.701 [2024-11-26 19:29:08.563111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.701 [2024-11-26 19:29:08.563133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.701 [2024-11-26 19:29:08.563141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.701 [2024-11-26 19:29:08.570684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.701 [2024-11-26 19:29:08.570705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.701 [2024-11-26 19:29:08.570713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.701 [2024-11-26 19:29:08.578437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.701 [2024-11-26 19:29:08.578459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 
nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.701 [2024-11-26 19:29:08.578472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.701 [2024-11-26 19:29:08.586545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.701 [2024-11-26 19:29:08.586566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.701 [2024-11-26 19:29:08.586574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.701 [2024-11-26 19:29:08.593732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.701 [2024-11-26 19:29:08.593754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.701 [2024-11-26 19:29:08.593762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.701 [2024-11-26 19:29:08.600579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.701 [2024-11-26 19:29:08.600601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.701 [2024-11-26 19:29:08.600609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.701 [2024-11-26 19:29:08.607368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.701 [2024-11-26 19:29:08.607389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.701 [2024-11-26 19:29:08.607397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.701 [2024-11-26 19:29:08.615377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.701 [2024-11-26 19:29:08.615399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.701 [2024-11-26 19:29:08.615407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.701 [2024-11-26 19:29:08.623225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.701 [2024-11-26 19:29:08.623246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.701 [2024-11-26 19:29:08.623254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.701 [2024-11-26 19:29:08.630424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.701 [2024-11-26 19:29:08.630446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.701 [2024-11-26 19:29:08.630454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.701 [2024-11-26 19:29:08.636993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.701 [2024-11-26 19:29:08.637013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.637021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.642150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.642175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.642182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.647343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.647363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.647372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.652288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.652308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.652316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.657683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.657704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.657712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.663177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.663197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.663205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.668385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 
[2024-11-26 19:29:08.668405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.668413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.673578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.673599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.673606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.678793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.678813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.678821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.683945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.683965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.683973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.689076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.689097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.689105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.693979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.694000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.694008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.699169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.699190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.699198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.704437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.704458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.704466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.709652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.709678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.709686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.714910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.714931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.714938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.720160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.720180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.720189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.725372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.725392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.725400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.730568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.730588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.730599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.735780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.735799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.735807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.741031] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.741051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.741058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.746252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.746272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.746280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.751415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.751436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.751443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.756616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.756636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.756644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.761891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.761912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.761920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.767187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.767207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.767216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.702 [2024-11-26 19:29:08.772412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.772433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.702 [2024-11-26 19:29:08.772440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:45.702 [2024-11-26 19:29:08.777642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.702 [2024-11-26 19:29:08.777666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.703 [2024-11-26 19:29:08.777681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.703 [2024-11-26 19:29:08.782890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.703 [2024-11-26 19:29:08.782910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.703 [2024-11-26 19:29:08.782918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.703 [2024-11-26 19:29:08.788124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.703 [2024-11-26 19:29:08.788146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.703 [2024-11-26 19:29:08.788153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.703 [2024-11-26 19:29:08.793346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.703 [2024-11-26 19:29:08.793366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.703 [2024-11-26 19:29:08.793374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.703 [2024-11-26 19:29:08.798632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.703 [2024-11-26 19:29:08.798653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.703 [2024-11-26 19:29:08.798662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.703 [2024-11-26 19:29:08.803929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.703 [2024-11-26 19:29:08.803950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.703 [2024-11-26 19:29:08.803958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.703 [2024-11-26 19:29:08.809184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.703 [2024-11-26 19:29:08.809206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.703 [2024-11-26 19:29:08.809213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.963 [2024-11-26 19:29:08.814479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.963 [2024-11-26 19:29:08.814501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.963 [2024-11-26 19:29:08.814509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.963 [2024-11-26 19:29:08.819866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.963 [2024-11-26 19:29:08.819886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.963 [2024-11-26 19:29:08.819894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.963 [2024-11-26 19:29:08.825084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.825105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.825113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.830325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.830345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.830354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.835549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.835569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.835577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.840760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.840781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.840789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.845952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.845972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.845980] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.851202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.851221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.851228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.856401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.856421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.856428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.861732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.861752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.861760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.866932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.866952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.866964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.872155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.872175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.872182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.877321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.877340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.877348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.882580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.882600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.882607] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.887827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.887847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.887855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.893018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.893039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.893047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.898292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.898312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.898320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.903480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.903500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.903507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.908768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.908788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.908795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.914040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.914064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.914072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.919338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.919358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:45.964 [2024-11-26 19:29:08.919366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.924279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.924299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.924306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.929336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.929356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.929364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.934375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.934395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.934403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.939431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.939451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.939459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.944505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.944525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.944533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.949770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.949791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.949799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.955035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.964 [2024-11-26 19:29:08.955055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8320 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.964 [2024-11-26 19:29:08.955063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.964 [2024-11-26 19:29:08.960308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:08.960328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:08.960336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:08.965631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:08.965652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:08.965660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:08.970933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:08.970954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:08.970961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:08.976120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:08.976139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:08.976148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:08.981238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:08.981258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:08.981265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:08.986453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:08.986473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:08.986480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:08.991592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:08.991611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:08.991619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:08.994423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:08.994442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:08.994449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:08.999595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:08.999615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:08.999626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:09.005440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:09.005459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:09.005467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:09.009981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:09.010000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:09.010007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:09.015197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:09.015216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:09.015224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:09.020408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:09.020428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:09.020435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:09.025674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:09.025693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:09.025701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:09.030771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:09.030790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:09.030798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:09.035939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:09.035958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:09.035966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:09.041142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:09.041162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:09.041170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:09.046276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:09.046295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:09.046302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:09.051472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:09.051490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:09.051498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:09.056659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:09.056684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:09.056692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:09.061856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 
00:27:45.965 [2024-11-26 19:29:09.061876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:09.061883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:09.067029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:09.067048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:09.067055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.965 [2024-11-26 19:29:09.072320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:45.965 [2024-11-26 19:29:09.072339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.965 [2024-11-26 19:29:09.072347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.225 [2024-11-26 19:29:09.077643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.225 [2024-11-26 19:29:09.077662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.225 [2024-11-26 19:29:09.077675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.225 [2024-11-26 19:29:09.082923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.225 [2024-11-26 19:29:09.082943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.225 [2024-11-26 19:29:09.082950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.225 [2024-11-26 19:29:09.087988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.225 [2024-11-26 19:29:09.088010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.225 [2024-11-26 19:29:09.088021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.225 [2024-11-26 19:29:09.093251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.225 [2024-11-26 19:29:09.093271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.225 [2024-11-26 19:29:09.093279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.225 [2024-11-26 19:29:09.098475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.225 [2024-11-26 19:29:09.098495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.225 [2024-11-26 19:29:09.098503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.225 [2024-11-26 19:29:09.103747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.225 [2024-11-26 19:29:09.103768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.225 [2024-11-26 19:29:09.103776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.225 [2024-11-26 19:29:09.108609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.225 [2024-11-26 19:29:09.108630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.225 [2024-11-26 19:29:09.108637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.225 [2024-11-26 19:29:09.113876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.225 [2024-11-26 19:29:09.113896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.225 [2024-11-26 19:29:09.113904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.225 [2024-11-26 19:29:09.119165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.225 [2024-11-26 19:29:09.119186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.225 [2024-11-26 19:29:09.119193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.225 [2024-11-26 19:29:09.124387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.225 [2024-11-26 19:29:09.124407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.225 [2024-11-26 19:29:09.124415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.225 [2024-11-26 19:29:09.129608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.225 [2024-11-26 19:29:09.129628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.225 [2024-11-26 19:29:09.129636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.225 [2024-11-26 19:29:09.134883] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.225 [2024-11-26 19:29:09.134909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.134917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.140135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.140156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.140163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.145413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.145432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.145439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.150658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.150683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.150691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.155899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.155918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.155926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.161186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.161206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.161214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.166449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.166470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.166477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
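The repeated "data digest error on tqpair" messages in this run come from the NVMe/TCP receive path: each DATA PDU may carry a 4-byte data digest (DDGST), defined by NVMe/TCP as the CRC32C of the PDU payload, and when the host driver finds a mismatch it completes the affected READ with COMMAND TRANSIENT TRANSPORT ERROR (00/22) so the command can be retried, which is exactly the pattern in the completions above. The following is only a minimal illustrative sketch of that check under those assumptions; crc32c() and tcp_verify_data_digest() are stand-in names, not SPDK functions.

#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

/* Illustrative bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78). */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical check: compare the digest computed over the received payload
 * against the 4-byte DDGST field that trails the DATA PDU payload.
 * Returning false corresponds to the "data digest error" prints above,
 * after which the command is completed with status 00/22. */
static bool tcp_verify_data_digest(const uint8_t *payload, size_t len,
                                   const uint8_t ddgst[4])
{
    uint32_t expected;

    memcpy(&expected, ddgst, sizeof(expected)); /* DDGST is carried little-endian on the wire */
    return crc32c(payload, len) == expected;
}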
00:27:46.226 [2024-11-26 19:29:09.171781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.171802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.171810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.177382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.177402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.177410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.226 5356.00 IOPS, 669.50 MiB/s [2024-11-26T18:29:09.340Z] [2024-11-26 19:29:09.184598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.184619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.184627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.190546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.190567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.190575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.197972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.197994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.198002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.205773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.205795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.205803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.212824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.212846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.212854] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.219159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.219180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.219188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.225430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.225451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.225459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.231546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.231566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.231575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.236913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.236934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.236945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.241893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.241914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.241922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.246814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.246835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.246843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.251877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.251898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.251906] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.257104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.257125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.257132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.262598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.262618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.262626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.269654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.269683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.269691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.275326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.226 [2024-11-26 19:29:09.275347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.226 [2024-11-26 19:29:09.275354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.226 [2024-11-26 19:29:09.280557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.227 [2024-11-26 19:29:09.280576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.227 [2024-11-26 19:29:09.280584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.227 [2024-11-26 19:29:09.283484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.227 [2024-11-26 19:29:09.283503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.227 [2024-11-26 19:29:09.283511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.227 [2024-11-26 19:29:09.288649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.227 [2024-11-26 19:29:09.288675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:46.227 [2024-11-26 19:29:09.288684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.227 [2024-11-26 19:29:09.292467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.227 [2024-11-26 19:29:09.292487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.227 [2024-11-26 19:29:09.292495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.227 [2024-11-26 19:29:09.296230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.227 [2024-11-26 19:29:09.296250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.227 [2024-11-26 19:29:09.296258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.227 [2024-11-26 19:29:09.299683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.227 [2024-11-26 19:29:09.299702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.227 [2024-11-26 19:29:09.299710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.227 [2024-11-26 19:29:09.304314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.227 [2024-11-26 19:29:09.304335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.227 [2024-11-26 19:29:09.304343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.227 [2024-11-26 19:29:09.308992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.227 [2024-11-26 19:29:09.309012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.227 [2024-11-26 19:29:09.309020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.227 [2024-11-26 19:29:09.313806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.227 [2024-11-26 19:29:09.313826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.227 [2024-11-26 19:29:09.313834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.227 [2024-11-26 19:29:09.319283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.227 [2024-11-26 19:29:09.319302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.227 [2024-11-26 19:29:09.319313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.227 [2024-11-26 19:29:09.324784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.227 [2024-11-26 19:29:09.324805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.227 [2024-11-26 19:29:09.324812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.227 [2024-11-26 19:29:09.330996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.227 [2024-11-26 19:29:09.331016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.227 [2024-11-26 19:29:09.331024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.227 [2024-11-26 19:29:09.335875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.227 [2024-11-26 19:29:09.335897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.227 [2024-11-26 19:29:09.335905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.487 [2024-11-26 19:29:09.340857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.487 [2024-11-26 19:29:09.340877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.487 [2024-11-26 19:29:09.340885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.487 [2024-11-26 19:29:09.345919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.487 [2024-11-26 19:29:09.345940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.487 [2024-11-26 19:29:09.345947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.487 [2024-11-26 19:29:09.350869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.487 [2024-11-26 19:29:09.350890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.487 [2024-11-26 19:29:09.350898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.487 [2024-11-26 19:29:09.355910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.487 [2024-11-26 19:29:09.355930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.487 [2024-11-26 19:29:09.355937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.487 [2024-11-26 19:29:09.361132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.487 [2024-11-26 19:29:09.361152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.487 [2024-11-26 19:29:09.361159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.487 [2024-11-26 19:29:09.366398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.487 [2024-11-26 19:29:09.366423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.487 [2024-11-26 19:29:09.366431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.487 [2024-11-26 19:29:09.371554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.487 [2024-11-26 19:29:09.371574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.487 [2024-11-26 19:29:09.371582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.487 [2024-11-26 19:29:09.376785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.487 [2024-11-26 19:29:09.376805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.487 [2024-11-26 19:29:09.376812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.487 [2024-11-26 19:29:09.381917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.487 [2024-11-26 19:29:09.381938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.487 [2024-11-26 19:29:09.381945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.487 [2024-11-26 19:29:09.387134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.387154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.387161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.392368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 
[2024-11-26 19:29:09.392388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.392396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.397651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.397679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.397687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.402903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.402925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.402933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.408157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.408177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.408184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.413427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.413446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.413454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.418597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.418617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.418625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.423908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.423927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.423934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.429117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.429136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.429144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.434311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.434331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.434338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.439464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.439485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.439492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.444678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.444698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.444705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.449914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.449934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.449942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.455106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.455127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.455138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.460360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.460381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.460389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.465598] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.465618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.465626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.470872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.470893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.470900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.476069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.476089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.476096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.481253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.481273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.481281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.486396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.486415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.486423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.491525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.491545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.491552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.496700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.496720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.496727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:27:46.488 [2024-11-26 19:29:09.501911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.501935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.501943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.507075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.507095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.507103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.512289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.512309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.512316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.488 [2024-11-26 19:29:09.517455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.488 [2024-11-26 19:29:09.517474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.488 [2024-11-26 19:29:09.517482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.489 [2024-11-26 19:29:09.522718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.489 [2024-11-26 19:29:09.522737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.489 [2024-11-26 19:29:09.522745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.489 [2024-11-26 19:29:09.527968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.489 [2024-11-26 19:29:09.527988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.489 [2024-11-26 19:29:09.527996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.489 [2024-11-26 19:29:09.533098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.489 [2024-11-26 19:29:09.533119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.489 [2024-11-26 19:29:09.533126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.489 [2024-11-26 19:29:09.538289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.489 [2024-11-26 19:29:09.538309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.489 [2024-11-26 19:29:09.538317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.489 [2024-11-26 19:29:09.543437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.489 [2024-11-26 19:29:09.543458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.489 [2024-11-26 19:29:09.543466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.489 [2024-11-26 19:29:09.548702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.489 [2024-11-26 19:29:09.548722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.489 [2024-11-26 19:29:09.548729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.489 [2024-11-26 19:29:09.553843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.489 [2024-11-26 19:29:09.553864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.489 [2024-11-26 19:29:09.553871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.489 [2024-11-26 19:29:09.559104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.489 [2024-11-26 19:29:09.559124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.489 [2024-11-26 19:29:09.559132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.489 [2024-11-26 19:29:09.564336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.489 [2024-11-26 19:29:09.564356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.489 [2024-11-26 19:29:09.564363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.489 [2024-11-26 19:29:09.569524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.489 [2024-11-26 19:29:09.569544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.489 [2024-11-26 19:29:09.569552] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.489 [2024-11-26 19:29:09.574796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.489 [2024-11-26 19:29:09.574817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.489 [2024-11-26 19:29:09.574824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.489 [2024-11-26 19:29:09.580140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.489 [2024-11-26 19:29:09.580161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.489 [2024-11-26 19:29:09.580169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.489 [2024-11-26 19:29:09.585499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.489 [2024-11-26 19:29:09.585520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.489 [2024-11-26 19:29:09.585527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.489 [2024-11-26 19:29:09.591188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.489 [2024-11-26 19:29:09.591209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.489 [2024-11-26 19:29:09.591221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.489 [2024-11-26 19:29:09.597104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.489 [2024-11-26 19:29:09.597125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.489 [2024-11-26 19:29:09.597134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.750 [2024-11-26 19:29:09.602614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.750 [2024-11-26 19:29:09.602634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-11-26 19:29:09.602642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.750 [2024-11-26 19:29:09.607838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.750 [2024-11-26 19:29:09.607859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-11-26 19:29:09.607867] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.750 [2024-11-26 19:29:09.613010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.750 [2024-11-26 19:29:09.613031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-11-26 19:29:09.613039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.750 [2024-11-26 19:29:09.618165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.750 [2024-11-26 19:29:09.618187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-11-26 19:29:09.618195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.750 [2024-11-26 19:29:09.623370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.750 [2024-11-26 19:29:09.623390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-11-26 19:29:09.623398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.750 [2024-11-26 19:29:09.628578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.750 [2024-11-26 19:29:09.628598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-11-26 19:29:09.628606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.750 [2024-11-26 19:29:09.633847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.750 [2024-11-26 19:29:09.633867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-11-26 19:29:09.633875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.750 [2024-11-26 19:29:09.639087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.750 [2024-11-26 19:29:09.639108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-11-26 19:29:09.639116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.750 [2024-11-26 19:29:09.644322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.750 [2024-11-26 19:29:09.644342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:46.750 [2024-11-26 19:29:09.644350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.750 [2024-11-26 19:29:09.649751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.750 [2024-11-26 19:29:09.649776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-11-26 19:29:09.649783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.750 [2024-11-26 19:29:09.655174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.750 [2024-11-26 19:29:09.655195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-11-26 19:29:09.655203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.750 [2024-11-26 19:29:09.660599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.750 [2024-11-26 19:29:09.660620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-11-26 19:29:09.660628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.750 [2024-11-26 19:29:09.666044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.750 [2024-11-26 19:29:09.666064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-11-26 19:29:09.666072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.750 [2024-11-26 19:29:09.672451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.750 [2024-11-26 19:29:09.672472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-11-26 19:29:09.672480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.750 [2024-11-26 19:29:09.677985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.750 [2024-11-26 19:29:09.678006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-11-26 19:29:09.678013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.750 [2024-11-26 19:29:09.683297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.750 [2024-11-26 19:29:09.683318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1856 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-11-26 19:29:09.683329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.750 [2024-11-26 19:29:09.688787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.750 [2024-11-26 19:29:09.688808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.750 [2024-11-26 19:29:09.688815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.750 [2024-11-26 19:29:09.694177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.750 [2024-11-26 19:29:09.694197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.694205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.699413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.699434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.699442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.704939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.704960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.704968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.710561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.710580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.710588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.716823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.716844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.716853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.722490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.722512] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.722519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.728188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.728209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.728217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.733729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.733755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.733762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.739096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.739117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.739125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.744570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.744591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.744598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.749840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.749860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.749868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.755236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.755257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.755265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.760692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.760713] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.760721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.766134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.766154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.766161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.771753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.771774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.771782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.777487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.777508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.777515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.783159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.783179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.783186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.788748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.788769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.788777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.794254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.794274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.794282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.799819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 
00:27:46.751 [2024-11-26 19:29:09.799840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.799848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.805146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.805166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.805174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.810503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.810523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.810531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.815823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.815843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.815851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.821091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.821111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.821119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.826219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.826240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.826254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.831855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.831876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.831883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.837424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.837445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.837453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.751 [2024-11-26 19:29:09.842784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.751 [2024-11-26 19:29:09.842805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.751 [2024-11-26 19:29:09.842814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.752 [2024-11-26 19:29:09.848231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.752 [2024-11-26 19:29:09.848253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.752 [2024-11-26 19:29:09.848261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.752 [2024-11-26 19:29:09.853609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.752 [2024-11-26 19:29:09.853630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.752 [2024-11-26 19:29:09.853638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.752 [2024-11-26 19:29:09.858885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:46.752 [2024-11-26 19:29:09.858906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.752 [2024-11-26 19:29:09.858913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.864109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.864130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.864137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.869305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.869326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.869334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.874615] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.874642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.874649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.879782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.879803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.879811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.885084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.885106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.885114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.890415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.890437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.890445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.896196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.896217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.896224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.901627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.901648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.901656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.907041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.907062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.907070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:27:47.012 [2024-11-26 19:29:09.912483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.912504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.912511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.917894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.917915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.917923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.923780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.923800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.923808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.929223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.929244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.929252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.934523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.934544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.934551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.939851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.939871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.939879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.942656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.942681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.942690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.948200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.948220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.948228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.953566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.953586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.953593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.958990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.959012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.959020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.964799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.964819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.964830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.012 [2024-11-26 19:29:09.971980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.012 [2024-11-26 19:29:09.972001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.012 [2024-11-26 19:29:09.972009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:09.979434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:09.979455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 19:29:09.979463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:09.985865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:09.985886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 19:29:09.985894] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:09.992320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:09.992340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 19:29:09.992348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:09.998194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:09.998214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 19:29:09.998222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:10.004544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:10.004565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 19:29:10.004574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:10.012562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:10.012584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 19:29:10.012592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:10.019886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:10.019910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 19:29:10.019920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:10.028296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:10.028319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 19:29:10.028328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:10.036028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:10.036051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 
19:29:10.036060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:10.043779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:10.043806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 19:29:10.043814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:10.051149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:10.051171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 19:29:10.051180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:10.059599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:10.059623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 19:29:10.059631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:10.066350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:10.066372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 19:29:10.066381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:10.071795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:10.071816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 19:29:10.071824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:10.077341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:10.077361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 19:29:10.077369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:10.082793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:10.082813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7040 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 19:29:10.082827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:10.088521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:10.088542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 19:29:10.088550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:10.094070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:10.094091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 19:29:10.094099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:10.099651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:10.099677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 19:29:10.099686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:10.105124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:10.105143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.013 [2024-11-26 19:29:10.105151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.013 [2024-11-26 19:29:10.110537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.013 [2024-11-26 19:29:10.110558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.014 [2024-11-26 19:29:10.110566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.014 [2024-11-26 19:29:10.115925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.014 [2024-11-26 19:29:10.115945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.014 [2024-11-26 19:29:10.115953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.014 [2024-11-26 19:29:10.121300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.014 [2024-11-26 19:29:10.121320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.014 [2024-11-26 19:29:10.121327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.273 [2024-11-26 19:29:10.126630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.273 [2024-11-26 19:29:10.126651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.273 [2024-11-26 19:29:10.126659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.273 [2024-11-26 19:29:10.132022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.273 [2024-11-26 19:29:10.132046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.273 [2024-11-26 19:29:10.132054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.273 [2024-11-26 19:29:10.137477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.273 [2024-11-26 19:29:10.137497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.273 [2024-11-26 19:29:10.137505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.273 [2024-11-26 19:29:10.142817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.273 [2024-11-26 19:29:10.142837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.273 [2024-11-26 19:29:10.142846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.273 [2024-11-26 19:29:10.147360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.273 [2024-11-26 19:29:10.147381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.273 [2024-11-26 19:29:10.147389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.273 [2024-11-26 19:29:10.152726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.273 [2024-11-26 19:29:10.152747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.273 [2024-11-26 19:29:10.152755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.273 [2024-11-26 19:29:10.158095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.273 [2024-11-26 19:29:10.158116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.273 [2024-11-26 19:29:10.158124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.273 [2024-11-26 19:29:10.163532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.273 [2024-11-26 19:29:10.163553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.273 [2024-11-26 19:29:10.163561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.273 [2024-11-26 19:29:10.168965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.273 [2024-11-26 19:29:10.168985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.273 [2024-11-26 19:29:10.168993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.273 [2024-11-26 19:29:10.174379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.273 [2024-11-26 19:29:10.174399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.273 [2024-11-26 19:29:10.174407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.273 [2024-11-26 19:29:10.179852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.273 [2024-11-26 19:29:10.179873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.273 [2024-11-26 19:29:10.179881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.273 [2024-11-26 19:29:10.185277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b61a0) 00:27:47.273 [2024-11-26 19:29:10.185309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.273 [2024-11-26 19:29:10.185317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.273 5503.50 IOPS, 687.94 MiB/s 00:27:47.273 Latency(us) 00:27:47.273 [2024-11-26T18:29:10.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.274 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:47.274 nvme0n1 : 2.00 5503.06 687.88 0.00 0.00 2904.96 647.56 9611.95 00:27:47.274 [2024-11-26T18:29:10.388Z] =================================================================================================================== 00:27:47.274 [2024-11-26T18:29:10.388Z] Total : 5503.06 687.88 0.00 0.00 2904.96 647.56 9611.95 00:27:47.274 { 00:27:47.274 "results": [ 00:27:47.274 { 00:27:47.274 "job": 
"nvme0n1", 00:27:47.274 "core_mask": "0x2", 00:27:47.274 "workload": "randread", 00:27:47.274 "status": "finished", 00:27:47.274 "queue_depth": 16, 00:27:47.274 "io_size": 131072, 00:27:47.274 "runtime": 2.003249, 00:27:47.274 "iops": 5503.060278577451, 00:27:47.274 "mibps": 687.8825348221814, 00:27:47.274 "io_failed": 0, 00:27:47.274 "io_timeout": 0, 00:27:47.274 "avg_latency_us": 2904.957567212661, 00:27:47.274 "min_latency_us": 647.5580952380952, 00:27:47.274 "max_latency_us": 9611.946666666667 00:27:47.274 } 00:27:47.274 ], 00:27:47.274 "core_count": 1 00:27:47.274 } 00:27:47.274 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:47.274 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:47.274 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:47.274 | .driver_specific 00:27:47.274 | .nvme_error 00:27:47.274 | .status_code 00:27:47.274 | .command_transient_transport_error' 00:27:47.274 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 356 > 0 )) 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3894015 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3894015 ']' 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3894015 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3894015 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3894015' 00:27:47.533 killing process with pid 3894015 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3894015 00:27:47.533 Received shutdown signal, test time was about 2.000000 seconds 00:27:47.533 00:27:47.533 Latency(us) 00:27:47.533 [2024-11-26T18:29:10.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.533 [2024-11-26T18:29:10.647Z] =================================================================================================================== 00:27:47.533 [2024-11-26T18:29:10.647Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3894015 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@54 -- # local rw bs qd 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3894532 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3894532 /var/tmp/bperf.sock 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3894532 ']' 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:47.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:47.533 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:47.534 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:47.793 [2024-11-26 19:29:10.663966] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
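For readers following the xtrace above: the get_transient_errcount/bperf_rpc lines a little earlier show how the harness derives the error count consumed by the (( 356 > 0 )) check — it asks the bdevperf app on /var/tmp/bperf.sock for per-bdev I/O statistics and pulls one NVMe error counter out with jq. A minimal stand-alone sketch of that step (rpc.py path, socket, and bdev name copied from the trace; the full host/digest.sh plumbing is omitted):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
# bdev_get_iostat reports per-status-code NVMe error counters under
# .driver_specific.nvme_error because bdev_nvme_set_options --nvme-error-stat
# was enabled before the controller was attached (same sequence as the next
# test case's setup below).
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# The case passes when the injected digest errors surfaced as transient
# transport errors; the randread run above counted 356 of them.
(( errcount > 0 ))
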
00:27:47.793 [2024-11-26 19:29:10.664018] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894532 ] 00:27:47.793 [2024-11-26 19:29:10.719085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.793 [2024-11-26 19:29:10.759072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.793 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:47.793 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:47.793 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:47.793 19:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:48.052 19:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:48.052 19:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.052 19:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:48.052 19:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.052 19:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:48.052 19:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:48.311 nvme0n1 00:27:48.571 19:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:48.571 19:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.571 19:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:48.571 19:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.571 19:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:48.571 19:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:48.571 Running I/O for 2 seconds... 
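For readability, the randwrite digest-error setup traced above boils down to the sequence below. This is a condensed sketch assembled only from the commands visible in this log, not the test script itself; bperf_rpc is the harness helper shown at host/digest.sh@18 (rpc.py against the bdevperf socket), while the socket that rpc_cmd talks to is an assumption based on the helper names.

    # start bdevperf on its own RPC socket; -z defers I/O until perform_tests is issued
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

    # equivalent of the bperf_rpc calls seen in the trace
    bperf_rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

    # keep per-status-code NVMe error counters and retry indefinitely, so digest errors are
    # counted as transient transport errors instead of failing the job (io_failed stays 0)
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # attach the TCP controller with data digest enabled (--ddgst); this creates bdev nvme0n1
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # corrupt 256 crc32c operations so the TCP data digests come out wrong
    # (rpc_cmd is assumed to target the SPDK target app's default RPC socket, not bperf.sock)
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

    # kick off the 2-second run, then read the transient-transport-error count back from iostat
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
    bperf_rpc bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

host/digest.sh then asserts that this count is greater than zero, which is the (( 356 > 0 )) check visible after the preceding randread run.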
00:27:48.571 [2024-11-26 19:29:11.536016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.571 [2024-11-26 19:29:11.536151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.572 [2024-11-26 19:29:11.536178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.572 [2024-11-26 19:29:11.545569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.572 [2024-11-26 19:29:11.545694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.572 [2024-11-26 19:29:11.545717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.572 [2024-11-26 19:29:11.555070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.572 [2024-11-26 19:29:11.555186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.572 [2024-11-26 19:29:11.555205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.572 [2024-11-26 19:29:11.564564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.572 [2024-11-26 19:29:11.564687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.572 [2024-11-26 19:29:11.564705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.572 [2024-11-26 19:29:11.574012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.572 [2024-11-26 19:29:11.574133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.572 [2024-11-26 19:29:11.574150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.572 [2024-11-26 19:29:11.583511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.572 [2024-11-26 19:29:11.583630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.572 [2024-11-26 19:29:11.583647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.572 [2024-11-26 19:29:11.592994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.572 [2024-11-26 19:29:11.593110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.572 [2024-11-26 19:29:11.593129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 
cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.572 [2024-11-26 19:29:11.602399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.572 [2024-11-26 19:29:11.602517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.572 [2024-11-26 19:29:11.602534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.572 [2024-11-26 19:29:11.611883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.572 [2024-11-26 19:29:11.612001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.572 [2024-11-26 19:29:11.612019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.572 [2024-11-26 19:29:11.621306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.572 [2024-11-26 19:29:11.621421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.572 [2024-11-26 19:29:11.621438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.572 [2024-11-26 19:29:11.630766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.572 [2024-11-26 19:29:11.630881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.572 [2024-11-26 19:29:11.630898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.572 [2024-11-26 19:29:11.640365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.572 [2024-11-26 19:29:11.640480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.572 [2024-11-26 19:29:11.640497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.572 [2024-11-26 19:29:11.649781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.572 [2024-11-26 19:29:11.649897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.572 [2024-11-26 19:29:11.649914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.572 [2024-11-26 19:29:11.659212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.572 [2024-11-26 19:29:11.659327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.572 [2024-11-26 19:29:11.659344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.572 [2024-11-26 19:29:11.668625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.572 [2024-11-26 19:29:11.668748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.572 [2024-11-26 19:29:11.668769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.572 [2024-11-26 19:29:11.678127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.572 [2024-11-26 19:29:11.678260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.572 [2024-11-26 19:29:11.678278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.832 [2024-11-26 19:29:11.688035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.832 [2024-11-26 19:29:11.688154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.832 [2024-11-26 19:29:11.688172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.832 [2024-11-26 19:29:11.697495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.832 [2024-11-26 19:29:11.697614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.832 [2024-11-26 19:29:11.697631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.832 [2024-11-26 19:29:11.706911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.832 [2024-11-26 19:29:11.707026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.832 [2024-11-26 19:29:11.707043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.832 [2024-11-26 19:29:11.716330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.832 [2024-11-26 19:29:11.716444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.832 [2024-11-26 19:29:11.716461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.832 [2024-11-26 19:29:11.725746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.832 [2024-11-26 19:29:11.725889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.832 [2024-11-26 19:29:11.725906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.832 [2024-11-26 19:29:11.735179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.832 [2024-11-26 19:29:11.735308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.832 [2024-11-26 19:29:11.735326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.832 [2024-11-26 19:29:11.744628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.832 [2024-11-26 19:29:11.744749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.832 [2024-11-26 19:29:11.744767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.832 [2024-11-26 19:29:11.754042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.832 [2024-11-26 19:29:11.754163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.832 [2024-11-26 19:29:11.754179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.832 [2024-11-26 19:29:11.763500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.832 [2024-11-26 19:29:11.763618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.832 [2024-11-26 19:29:11.763634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.832 [2024-11-26 19:29:11.773039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.832 [2024-11-26 19:29:11.773152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.832 [2024-11-26 19:29:11.773169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.832 [2024-11-26 19:29:11.782436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.832 [2024-11-26 19:29:11.782551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.832 [2024-11-26 19:29:11.782568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.832 [2024-11-26 19:29:11.791870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.832 [2024-11-26 19:29:11.791984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.832 [2024-11-26 19:29:11.792001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.832 [2024-11-26 19:29:11.801514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.832 [2024-11-26 19:29:11.801645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.832 [2024-11-26 19:29:11.801662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.832 [2024-11-26 19:29:11.811003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.833 [2024-11-26 19:29:11.811139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.833 [2024-11-26 19:29:11.811156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.833 [2024-11-26 19:29:11.820477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.833 [2024-11-26 19:29:11.820591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.833 [2024-11-26 19:29:11.820608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.833 [2024-11-26 19:29:11.829930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.833 [2024-11-26 19:29:11.830043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.833 [2024-11-26 19:29:11.830060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.833 [2024-11-26 19:29:11.839334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.833 [2024-11-26 19:29:11.839445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.833 [2024-11-26 19:29:11.839462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.833 [2024-11-26 19:29:11.848722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.833 [2024-11-26 19:29:11.848831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.833 [2024-11-26 19:29:11.848850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.833 [2024-11-26 19:29:11.858138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.833 [2024-11-26 19:29:11.858253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.833 [2024-11-26 
19:29:11.858270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.833 [2024-11-26 19:29:11.867709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.833 [2024-11-26 19:29:11.867827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.833 [2024-11-26 19:29:11.867845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.833 [2024-11-26 19:29:11.877227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.833 [2024-11-26 19:29:11.877357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.833 [2024-11-26 19:29:11.877375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.833 [2024-11-26 19:29:11.886746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.833 [2024-11-26 19:29:11.886878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.833 [2024-11-26 19:29:11.886895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.833 [2024-11-26 19:29:11.896227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.833 [2024-11-26 19:29:11.896359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.833 [2024-11-26 19:29:11.896377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.833 [2024-11-26 19:29:11.905718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.833 [2024-11-26 19:29:11.905851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.833 [2024-11-26 19:29:11.905868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.833 [2024-11-26 19:29:11.915283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.833 [2024-11-26 19:29:11.915414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.833 [2024-11-26 19:29:11.915437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.833 [2024-11-26 19:29:11.924768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.833 [2024-11-26 19:29:11.924885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:48.833 [2024-11-26 19:29:11.924902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.833 [2024-11-26 19:29:11.934243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.833 [2024-11-26 19:29:11.934354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.833 [2024-11-26 19:29:11.934371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:48.833 [2024-11-26 19:29:11.943803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:48.833 [2024-11-26 19:29:11.943922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.833 [2024-11-26 19:29:11.943940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:11.953378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:11.953511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:11.953529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:11.962840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:11.962955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:11.962973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:11.972345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:11.972474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:11.972492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:11.981816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:11.981933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:11.981950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:11.991247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:11.991361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18512 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:11.991379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:12.000655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:12.000782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:12.000800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:12.010066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:12.010183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:12.010200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:12.019499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:12.019630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:12.019648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:12.028927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:12.029040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:12.029058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:12.038360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:12.038473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:12.038490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:12.047767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:12.047881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:12.047899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:12.057477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:12.057592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 
nsid:1 lba:4965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:12.057609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:12.067012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:12.067145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:12.067162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:12.076484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:12.076615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:12.076633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:12.085947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:12.086060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:12.086077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:12.095347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:12.095461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:12.095479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:12.104757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:12.104873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:12.104890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:12.114213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:12.114327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:12.114345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:12.123690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:12.123806] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:12.123823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:12.133092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:12.133205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:12.133222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:12.142510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.093 [2024-11-26 19:29:12.142640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.093 [2024-11-26 19:29:12.142658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.093 [2024-11-26 19:29:12.151993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.094 [2024-11-26 19:29:12.152109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.094 [2024-11-26 19:29:12.152125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.094 [2024-11-26 19:29:12.161399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.094 [2024-11-26 19:29:12.161512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.094 [2024-11-26 19:29:12.161531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.094 [2024-11-26 19:29:12.170790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.094 [2024-11-26 19:29:12.170904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.094 [2024-11-26 19:29:12.170922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.094 [2024-11-26 19:29:12.180178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.094 [2024-11-26 19:29:12.180293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.094 [2024-11-26 19:29:12.180310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.094 [2024-11-26 19:29:12.189589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.094 [2024-11-26 19:29:12.189710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.094 [2024-11-26 19:29:12.189728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.094 [2024-11-26 19:29:12.198988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.094 [2024-11-26 19:29:12.199101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.094 [2024-11-26 19:29:12.199118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.354 [2024-11-26 19:29:12.208665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.354 [2024-11-26 19:29:12.208790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.354 [2024-11-26 19:29:12.208807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.354 [2024-11-26 19:29:12.218260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.354 [2024-11-26 19:29:12.218374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.354 [2024-11-26 19:29:12.218391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.354 [2024-11-26 19:29:12.227644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.354 [2024-11-26 19:29:12.227767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.354 [2024-11-26 19:29:12.227785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.354 [2024-11-26 19:29:12.237051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.354 [2024-11-26 19:29:12.237169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.354 [2024-11-26 19:29:12.237186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.354 [2024-11-26 19:29:12.246435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.354 [2024-11-26 19:29:12.246552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.354 [2024-11-26 19:29:12.246568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.354 [2024-11-26 19:29:12.255839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.354 [2024-11-26 
19:29:12.255954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.354 [2024-11-26 19:29:12.255971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.354 [2024-11-26 19:29:12.265235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.354 [2024-11-26 19:29:12.265349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.354 [2024-11-26 19:29:12.265365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.354 [2024-11-26 19:29:12.274639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.354 [2024-11-26 19:29:12.274761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.354 [2024-11-26 19:29:12.274795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.354 [2024-11-26 19:29:12.284156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.354 [2024-11-26 19:29:12.284268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.354 [2024-11-26 19:29:12.284286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.354 [2024-11-26 19:29:12.293564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.354 [2024-11-26 19:29:12.293681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.354 [2024-11-26 19:29:12.293699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.354 [2024-11-26 19:29:12.303169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.354 [2024-11-26 19:29:12.303285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.354 [2024-11-26 19:29:12.303304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.354 [2024-11-26 19:29:12.312834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.354 [2024-11-26 19:29:12.312952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.354 [2024-11-26 19:29:12.312970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.354 [2024-11-26 19:29:12.322287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with 
pdu=0x200016efda78 00:27:49.354 [2024-11-26 19:29:12.322419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.354 [2024-11-26 19:29:12.322437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.354 [2024-11-26 19:29:12.331805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.354 [2024-11-26 19:29:12.331937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.354 [2024-11-26 19:29:12.331955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.354 [2024-11-26 19:29:12.341279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.354 [2024-11-26 19:29:12.341391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.354 [2024-11-26 19:29:12.341408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.354 [2024-11-26 19:29:12.350732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.354 [2024-11-26 19:29:12.350847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.354 [2024-11-26 19:29:12.350865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.354 [2024-11-26 19:29:12.360130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.354 [2024-11-26 19:29:12.360244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.354 [2024-11-26 19:29:12.360261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.354 [2024-11-26 19:29:12.369528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.354 [2024-11-26 19:29:12.369642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.354 [2024-11-26 19:29:12.369659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.354 [2024-11-26 19:29:12.378936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.355 [2024-11-26 19:29:12.379051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.355 [2024-11-26 19:29:12.379068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.355 [2024-11-26 19:29:12.388333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.355 [2024-11-26 19:29:12.388447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.355 [2024-11-26 19:29:12.388464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.355 [2024-11-26 19:29:12.397710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.355 [2024-11-26 19:29:12.397826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.355 [2024-11-26 19:29:12.397843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.355 [2024-11-26 19:29:12.407119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.355 [2024-11-26 19:29:12.407233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.355 [2024-11-26 19:29:12.407253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.355 [2024-11-26 19:29:12.416500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.355 [2024-11-26 19:29:12.416614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.355 [2024-11-26 19:29:12.416631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.355 [2024-11-26 19:29:12.425916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.355 [2024-11-26 19:29:12.426029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.355 [2024-11-26 19:29:12.426046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.355 [2024-11-26 19:29:12.435319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.355 [2024-11-26 19:29:12.435433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.355 [2024-11-26 19:29:12.435450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.355 [2024-11-26 19:29:12.444746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.355 [2024-11-26 19:29:12.444860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.355 [2024-11-26 19:29:12.444877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.355 [2024-11-26 19:29:12.454131] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.355 [2024-11-26 19:29:12.454243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.355 [2024-11-26 19:29:12.454260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.355 [2024-11-26 19:29:12.463650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.355 [2024-11-26 19:29:12.463774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.355 [2024-11-26 19:29:12.463791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.615 [2024-11-26 19:29:12.473295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.615 [2024-11-26 19:29:12.473411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.615 [2024-11-26 19:29:12.473428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.615 [2024-11-26 19:29:12.482728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.615 [2024-11-26 19:29:12.482844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.615 [2024-11-26 19:29:12.482861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.615 [2024-11-26 19:29:12.492131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.615 [2024-11-26 19:29:12.492247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.615 [2024-11-26 19:29:12.492265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.615 [2024-11-26 19:29:12.501535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.615 [2024-11-26 19:29:12.501649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.615 [2024-11-26 19:29:12.501667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.615 [2024-11-26 19:29:12.510949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.615 [2024-11-26 19:29:12.511081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.615 [2024-11-26 19:29:12.511098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.615 
[2024-11-26 19:29:12.520421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.520536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.520553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 26917.00 IOPS, 105.14 MiB/s [2024-11-26T18:29:12.730Z] [2024-11-26 19:29:12.529823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.529938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.529955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.539218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.539331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.539348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.548705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.548821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.548839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.558105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.558235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.558252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.567780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.567898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.567915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.577226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.577344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.577361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.586706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.586821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.586838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.596153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.596269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.596286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.605606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.605729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.605746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.615053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.615186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.615203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.624485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.624602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.624619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.633882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.633995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.634012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.643296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.643410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.643427] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.652709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.652825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.652851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.662157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.662272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.662290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.671641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.671784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.671802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.681341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.681457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.681475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.690875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.690992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.691009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.700288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.700401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.700418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.709697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.709814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.709830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.616 [2024-11-26 19:29:12.719117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.616 [2024-11-26 19:29:12.719233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.616 [2024-11-26 19:29:12.719249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.876 [2024-11-26 19:29:12.728777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.876 [2024-11-26 19:29:12.728893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.876 [2024-11-26 19:29:12.728911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.876 [2024-11-26 19:29:12.738311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.876 [2024-11-26 19:29:12.738429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.876 [2024-11-26 19:29:12.738447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.876 [2024-11-26 19:29:12.747724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.876 [2024-11-26 19:29:12.747840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.876 [2024-11-26 19:29:12.747857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.876 [2024-11-26 19:29:12.757103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.876 [2024-11-26 19:29:12.757219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.876 [2024-11-26 19:29:12.757235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.876 [2024-11-26 19:29:12.766561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.876 [2024-11-26 19:29:12.766680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.876 [2024-11-26 19:29:12.766697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.876 [2024-11-26 19:29:12.776233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.876 [2024-11-26 19:29:12.776351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.876 [2024-11-26 
19:29:12.776368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.876 [2024-11-26 19:29:12.785731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.876 [2024-11-26 19:29:12.785874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.876 [2024-11-26 19:29:12.785891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.876 [2024-11-26 19:29:12.795131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.795247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.795264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.804562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.804682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.804700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.813951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.814092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.814124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.823668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.823809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.823826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.833183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.833315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.833333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.842693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.842826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11991 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:49.877 [2024-11-26 19:29:12.842843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.852156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.852269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.852286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.861585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.861712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.861730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.871042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.871157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.871174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.880464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.880581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.880598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.889872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.889985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.890002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.899285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.899400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.899420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.908711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.908828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10746 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.908845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.918109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.918226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.918243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.927522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.927635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.927652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.936970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.937085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.937102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.946383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.946498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.946514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.955799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.955917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.955934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.965194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.965325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.965343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.974704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.974836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:107 nsid:1 lba:11243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.974853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.877 [2024-11-26 19:29:12.984207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:49.877 [2024-11-26 19:29:12.984328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.877 [2024-11-26 19:29:12.984345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:12.993891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.138 [2024-11-26 19:29:12.994011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:12.994029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:13.003306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.138 [2024-11-26 19:29:13.003423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:13.003440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:13.012711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.138 [2024-11-26 19:29:13.012825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:13.012842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:13.022168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.138 [2024-11-26 19:29:13.022282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:13.022299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:13.031573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.138 [2024-11-26 19:29:13.031689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:13.031706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:13.041038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.138 [2024-11-26 19:29:13.041150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:13.041167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:13.050457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.138 [2024-11-26 19:29:13.050571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:13.050588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:13.059857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.138 [2024-11-26 19:29:13.059974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:13.059991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:13.069238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.138 [2024-11-26 19:29:13.069351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:13.069368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:13.078920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.138 [2024-11-26 19:29:13.079037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:13.079055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:13.088365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.138 [2024-11-26 19:29:13.088496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:13.088512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:13.097852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.138 [2024-11-26 19:29:13.097968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:13.097985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:13.107261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.138 [2024-11-26 
19:29:13.107394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:13.107411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:13.116744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.138 [2024-11-26 19:29:13.116858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:13.116875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:13.126147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.138 [2024-11-26 19:29:13.126260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:13.126277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:13.135532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.138 [2024-11-26 19:29:13.135645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:13.135662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:13.144933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.138 [2024-11-26 19:29:13.145047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:13.145067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:13.154328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.138 [2024-11-26 19:29:13.154441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:13.154458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:13.163735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.138 [2024-11-26 19:29:13.163849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:13.163866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.138 [2024-11-26 19:29:13.173164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 
00:27:50.138 [2024-11-26 19:29:13.173278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-11-26 19:29:13.173295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.139 [2024-11-26 19:29:13.182569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.139 [2024-11-26 19:29:13.182685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-11-26 19:29:13.182701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.139 [2024-11-26 19:29:13.191978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.139 [2024-11-26 19:29:13.192091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-11-26 19:29:13.192108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.139 [2024-11-26 19:29:13.201342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.139 [2024-11-26 19:29:13.201458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-11-26 19:29:13.201476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.139 [2024-11-26 19:29:13.210776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.139 [2024-11-26 19:29:13.210889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-11-26 19:29:13.210906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.139 [2024-11-26 19:29:13.220140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.139 [2024-11-26 19:29:13.220254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-11-26 19:29:13.220271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.139 [2024-11-26 19:29:13.229546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.139 [2024-11-26 19:29:13.229683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-11-26 19:29:13.229700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.139 [2024-11-26 19:29:13.238983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) 
with pdu=0x200016efda78 00:27:50.139 [2024-11-26 19:29:13.239096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-11-26 19:29:13.239112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.139 [2024-11-26 19:29:13.248518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.139 [2024-11-26 19:29:13.248635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-11-26 19:29:13.248652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.258142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.258255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.258273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.267529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.267642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.267658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.277004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.277118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.277135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.286429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.286543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.286560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.295820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.295959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.295975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.305419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.305535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.305552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.314802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.314916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.314933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.324187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.324300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.324317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.333848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.333964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.333982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.343338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.343455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.343472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.352795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.352926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.352943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.362274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.362403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.362420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.371733] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.371849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.371865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.381121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.381234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.381250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.390506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.390621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.390641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.399890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.400002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.400021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.409297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.409413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.409431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.418688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.418806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.418823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.428083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.428197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.428214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 
[2024-11-26 19:29:13.437510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.437624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.437641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.446902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.447015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.447031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.456295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.456409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.456426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.465697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.465829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.465847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.475153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.475276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.475293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.484560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.484681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.484699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.399 [2024-11-26 19:29:13.493949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78 00:27:50.399 [2024-11-26 19:29:13.494062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.399 [2024-11-26 19:29:13.494079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 
p:0 m:0 dnr:0
00:27:50.400 [2024-11-26 19:29:13.503343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78
00:27:50.400 [2024-11-26 19:29:13.503459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:50.400 [2024-11-26 19:29:13.503476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:50.659 [2024-11-26 19:29:13.512966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78
00:27:50.659 [2024-11-26 19:29:13.513098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:50.659 [2024-11-26 19:29:13.513115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:50.659 [2024-11-26 19:29:13.522488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78
00:27:50.659 [2024-11-26 19:29:13.522601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:50.659 [2024-11-26 19:29:13.522618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:50.659 26977.50 IOPS, 105.38 MiB/s [2024-11-26T18:29:13.773Z]
[2024-11-26 19:29:13.531896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882180) with pdu=0x200016efda78
00:27:50.659 [2024-11-26 19:29:13.532008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:50.659 [2024-11-26 19:29:13.532026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:50.659
00:27:50.659 Latency(us)
00:27:50.659 [2024-11-26T18:29:13.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:50.659 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:50.659 nvme0n1 : 2.01 26980.68 105.39 0.00 0.00 4736.12 3401.63 9861.61
00:27:50.659 [2024-11-26T18:29:13.773Z] ===================================================================================================================
00:27:50.659 [2024-11-26T18:29:13.773Z] Total : 26980.68 105.39 0.00 0.00 4736.12 3401.63 9861.61
00:27:50.659 {
00:27:50.659 "results": [
00:27:50.659 {
00:27:50.659 "job": "nvme0n1",
00:27:50.659 "core_mask": "0x2",
00:27:50.659 "workload": "randwrite",
00:27:50.659 "status": "finished",
00:27:50.659 "queue_depth": 128,
00:27:50.659 "io_size": 4096,
00:27:50.659 "runtime": 2.005991,
00:27:50.659 "iops": 26980.67937493239,
00:27:50.659 "mibps": 105.39327880832965,
00:27:50.659 "io_failed": 0,
00:27:50.659 "io_timeout": 0,
00:27:50.659 "avg_latency_us": 4736.120172693064,
00:27:50.659 "min_latency_us": 3401.630476190476,
00:27:50.659 "max_latency_us": 9861.60761904762
00:27:50.659 }
00:27:50.659 ],
00:27:50.659 "core_count": 1
00:27:50.659 }
00:27:50.659 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
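For the 4 KiB, queue-depth-128 pass summarized in the JSON block above, bdevperf itself reports io_failed 0, while every corrupted write is logged earlier in this section as a COMMAND TRANSIENT TRANSPORT ERROR completion. The pass/fail decision therefore comes from the NVMe error counters, which the trace below reads back over the bperf RPC socket with bdev_get_iostat and the jq filter shown there. A standalone equivalent of that get_transient_errcount check would look roughly like the sketch below; the rpc.py path, socket and bdev name are the ones recorded in this log, and 212 is simply the count this particular run observed.

  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
               bdev_get_iostat -b nvme0n1 \
             | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # the step passes only if the digest failures surfaced as transient transport errors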
00:27:50.659 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:50.659 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:50.659 | .driver_specific
00:27:50.659 | .nvme_error
00:27:50.659 | .status_code
00:27:50.659 | .command_transient_transport_error'
00:27:50.659 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:50.659 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 212 > 0 ))
00:27:50.659 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3894532
00:27:50.659 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3894532 ']'
00:27:50.659 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3894532
00:27:50.659 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:50.659 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:50.659 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3894532
00:27:50.919 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:50.919 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:50.919 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3894532'
00:27:50.919 killing process with pid 3894532
00:27:50.919 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3894532
00:27:50.919 Received shutdown signal, test time was about 2.000000 seconds
00:27:50.919
00:27:50.919 Latency(us)
00:27:50.919 [2024-11-26T18:29:14.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:50.919 [2024-11-26T18:29:14.033Z] ===================================================================================================================
00:27:50.919 [2024-11-26T18:29:14.033Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:50.919 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3894532
00:27:50.919 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:27:50.919 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:50.919 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:50.919 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:50.919 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:50.919 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3895009
00:27:50.919 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3895009 /var/tmp/bperf.sock
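The next pass (run_bperf_err randwrite 131072 16) reuses the same bdevperf binary, started in RPC-controlled mode (-z) so the workload can be configured and kicked off over /var/tmp/bperf.sock. A minimal sketch of that launch, assuming the workspace layout shown in the trace and that waitforlisten is the helper sourced from autotest_common.sh:

  bperf_sock=/var/tmp/bperf.sock
  # Flags mirror the traced invocation recorded on the next line; socket path
  # and workspace prefix are environment-specific.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r "$bperf_sock" -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  waitforlisten "$bperfpid" "$bperf_sock"   # blocks until the RPC socket is accepting connections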
00:27:50.919 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:27:50.919 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3895009 ']'
00:27:50.919 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:50.919 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:50.919 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:50.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:50.919 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:50.919 19:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:50.919 [2024-11-26 19:29:14.013076] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization...
00:27:50.919 [2024-11-26 19:29:14.013125] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895009 ]
00:27:50.919 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:50.919 Zero copy mechanism will not be used.
00:27:51.178 [2024-11-26 19:29:14.087664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:51.178 [2024-11-26 19:29:14.127374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:51.178 19:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:51.178 19:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:51.178 19:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:51.178 19:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:51.437 19:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:51.437 19:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:51.437 19:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:51.438 19:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:51.438 19:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:51.438 19:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:51.696 nvme0n1
00:27:51.696 19:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67
-- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:51.696 19:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.696 19:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:51.696 19:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.696 19:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:51.696 19:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:51.956 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:51.956 Zero copy mechanism will not be used. 00:27:51.956 Running I/O for 2 seconds... 00:27:51.956 [2024-11-26 19:29:14.819909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.956 [2024-11-26 19:29:14.820043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.956 [2024-11-26 19:29:14.820073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.956 [2024-11-26 19:29:14.825404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.956 [2024-11-26 19:29:14.825548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.956 [2024-11-26 19:29:14.825572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.956 [2024-11-26 19:29:14.831238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.956 [2024-11-26 19:29:14.831333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.956 [2024-11-26 19:29:14.831354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.956 [2024-11-26 19:29:14.836941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.956 [2024-11-26 19:29:14.837072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.956 [2024-11-26 19:29:14.837090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.956 [2024-11-26 19:29:14.842651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.956 [2024-11-26 19:29:14.842745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.956 [2024-11-26 19:29:14.842764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.956 [2024-11-26 19:29:14.849462] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.956 [2024-11-26 19:29:14.849581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.956 [2024-11-26 19:29:14.849599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.956 [2024-11-26 19:29:14.855858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.956 [2024-11-26 19:29:14.855936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.956 [2024-11-26 19:29:14.855955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.956 [2024-11-26 19:29:14.862122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.956 [2024-11-26 19:29:14.862204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.956 [2024-11-26 19:29:14.862222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.956 [2024-11-26 19:29:14.868185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.956 [2024-11-26 19:29:14.868255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.956 [2024-11-26 19:29:14.868274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.956 [2024-11-26 19:29:14.872826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.956 [2024-11-26 19:29:14.872886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.956 [2024-11-26 19:29:14.872907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.956 [2024-11-26 19:29:14.877563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.956 [2024-11-26 19:29:14.877653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.956 [2024-11-26 19:29:14.877676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.956 [2024-11-26 19:29:14.882731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.956 [2024-11-26 19:29:14.882903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.956 [2024-11-26 19:29:14.882921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.956 
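The bash trace above captures the whole setup for this digest-error case: host/digest.sh points bdevperf's RPC socket at /var/tmp/bperf.sock, enables NVMe error statistics with an unlimited bdev retry count, clears any leftover crc32c error injection on the target, attaches the controller over TCP with data digest enabled (--ddgst), arms crc32c corruption injection, and then drives the queued 131072-byte random writes with perform_tests. Below is a minimal stand-alone sketch of that sequence, assuming bdevperf is already running with '-r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z' as in this run, and that the nvmf target is reachable on rpc.py's default socket (the harness' rpc_cmd wrapper hides the real target socket, so that part is an assumption):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"

# bdevperf side: keep per-controller NVMe error stats and retry failed bdev I/O without limit.
"$RPC" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# target side: make sure no crc32c error injection is left armed from an earlier run.
"$RPC" accel_error_inject_error -o crc32c -t disable

# attach the remote namespace with data digest (CRC32C over the data PDUs) enabled.
"$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# target side: arm crc32c corruption injection (flags exactly as traced), then run the job.
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests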
[2024-11-26 19:29:14.889430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.956 [2024-11-26 19:29:14.889571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.956 [2024-11-26 19:29:14.889589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.956 [2024-11-26 19:29:14.895470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.956 [2024-11-26 19:29:14.895568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.956 [2024-11-26 19:29:14.895587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.956 [2024-11-26 19:29:14.901818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.956 [2024-11-26 19:29:14.901883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.956 [2024-11-26 19:29:14.901902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.956 [2024-11-26 19:29:14.907243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.956 [2024-11-26 19:29:14.907299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.956 [2024-11-26 19:29:14.907317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.956 [2024-11-26 19:29:14.912299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.956 [2024-11-26 19:29:14.912353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:14.912370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:14.917381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:14.917437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:14.917455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:14.922404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:14.922458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:14.922476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:51.957 [2024-11-26 19:29:14.927762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:14.927817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:14.927834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:14.933348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:14.933414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:14.933432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:14.938301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:14.938356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:14.938374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:14.943355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:14.943424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:14.943442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:14.948348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:14.948399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:14.948417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:14.953852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:14.953904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:14.953922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:14.959122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:14.959177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:14.959196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:14.963990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:14.964048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:14.964066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:14.968705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:14.968768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:14.968785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:14.973691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:14.973755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:14.973773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:14.978542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:14.978615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:14.978633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:14.983506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:14.983567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:14.983584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:14.989039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:14.989097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:14.989115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:14.993875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:14.993979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:14.993996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:15.000320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:15.000377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:15.000395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:15.005357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:15.005413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:15.005430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:15.011125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:15.011213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:15.011235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:15.018426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:15.018577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:15.018595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:15.024922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:15.025133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:15.025153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:15.031075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:15.031315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:15.031334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:15.038064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:15.038340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:15.038359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:15.044067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:15.044314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:15.044333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:15.050355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:15.050638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:15.050656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:15.056730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:15.056968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.957 [2024-11-26 19:29:15.056987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.957 [2024-11-26 19:29:15.063161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:51.957 [2024-11-26 19:29:15.063411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.958 [2024-11-26 19:29:15.063432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.219 [2024-11-26 19:29:15.069276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.219 [2024-11-26 19:29:15.069544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.219 [2024-11-26 19:29:15.069564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.219 [2024-11-26 19:29:15.074717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.219 [2024-11-26 19:29:15.074977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.219 [2024-11-26 19:29:15.074996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.219 [2024-11-26 19:29:15.079757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.219 [2024-11-26 19:29:15.080007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.219 [2024-11-26 19:29:15.080027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.219 [2024-11-26 19:29:15.084414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.219 [2024-11-26 19:29:15.084676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.219 [2024-11-26 19:29:15.084697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.219 [2024-11-26 19:29:15.089792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.219 [2024-11-26 19:29:15.090073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.219 [2024-11-26 19:29:15.090092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.219 [2024-11-26 19:29:15.095717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.219 [2024-11-26 19:29:15.096080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.219 [2024-11-26 19:29:15.096099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.219 [2024-11-26 19:29:15.101873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.219 [2024-11-26 19:29:15.102258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.219 [2024-11-26 19:29:15.102278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.219 [2024-11-26 19:29:15.108342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.219 [2024-11-26 19:29:15.108720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.219 [2024-11-26 19:29:15.108739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.219 [2024-11-26 19:29:15.114782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.219 [2024-11-26 19:29:15.115062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.219 [2024-11-26 19:29:15.115081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.219 [2024-11-26 19:29:15.120756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.219 [2024-11-26 19:29:15.121071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.219 [2024-11-26 
19:29:15.121092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.219 [2024-11-26 19:29:15.126832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.219 [2024-11-26 19:29:15.127146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.219 [2024-11-26 19:29:15.127165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.219 [2024-11-26 19:29:15.132724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.219 [2024-11-26 19:29:15.133028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.219 [2024-11-26 19:29:15.133048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.219 [2024-11-26 19:29:15.138662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.219 [2024-11-26 19:29:15.138979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.219 [2024-11-26 19:29:15.138998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.219 [2024-11-26 19:29:15.145214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.219 [2024-11-26 19:29:15.145451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.219 [2024-11-26 19:29:15.145470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.219 [2024-11-26 19:29:15.151958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.219 [2024-11-26 19:29:15.152292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.152311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.157426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.157653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.157677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.162296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.162533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:52.220 [2024-11-26 19:29:15.162552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.166564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.166828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.166851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.170784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.171047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.171066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.175080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.175315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.175334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.179282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.179511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.179529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.183462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.183710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.183729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.187667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.187902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.187921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.191828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.192079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.192098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.196002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.196245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.196264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.200151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.200383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.200402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.204306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.204546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.204565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.208443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.208698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.208717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.212557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.212805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.212824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.216708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.216971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.216990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.221012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.221255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.221274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.225514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.225753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.225772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.230263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.230489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.230508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.235120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.235367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.235386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.240498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.240740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.240758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.245216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.245461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.245480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.249648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.249889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.249908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.254028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.254257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.254276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.258283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.258517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.258536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.262731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.262963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.262982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.267138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.267365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.267384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.220 [2024-11-26 19:29:15.271614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.220 [2024-11-26 19:29:15.271878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.220 [2024-11-26 19:29:15.271897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.221 [2024-11-26 19:29:15.276054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.221 [2024-11-26 19:29:15.276287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.221 [2024-11-26 19:29:15.276306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.221 [2024-11-26 19:29:15.280468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.221 [2024-11-26 19:29:15.280706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.221 [2024-11-26 19:29:15.280728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.221 [2024-11-26 19:29:15.285037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.221 [2024-11-26 19:29:15.285303] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.221 [2024-11-26 19:29:15.285323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.221 [2024-11-26 19:29:15.290017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.221 [2024-11-26 19:29:15.290240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.221 [2024-11-26 19:29:15.290258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.221 [2024-11-26 19:29:15.295273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.221 [2024-11-26 19:29:15.295500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.221 [2024-11-26 19:29:15.295519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.221 [2024-11-26 19:29:15.300347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.221 [2024-11-26 19:29:15.300575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.221 [2024-11-26 19:29:15.300594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.221 [2024-11-26 19:29:15.305638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.221 [2024-11-26 19:29:15.305876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.221 [2024-11-26 19:29:15.305896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.221 [2024-11-26 19:29:15.310133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.221 [2024-11-26 19:29:15.310370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.221 [2024-11-26 19:29:15.310388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.221 [2024-11-26 19:29:15.314526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.221 [2024-11-26 19:29:15.314772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.221 [2024-11-26 19:29:15.314790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.221 [2024-11-26 19:29:15.318739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.221 [2024-11-26 19:29:15.318978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.221 [2024-11-26 19:29:15.318997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.221 [2024-11-26 19:29:15.322830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.221 [2024-11-26 19:29:15.323075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.221 [2024-11-26 19:29:15.323094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.221 [2024-11-26 19:29:15.327298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.221 [2024-11-26 19:29:15.327538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.221 [2024-11-26 19:29:15.327563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.482 [2024-11-26 19:29:15.331533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.482 [2024-11-26 19:29:15.331780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.482 [2024-11-26 19:29:15.331799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.482 [2024-11-26 19:29:15.335950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.482 [2024-11-26 19:29:15.336186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.482 [2024-11-26 19:29:15.336205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.482 [2024-11-26 19:29:15.340182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.482 [2024-11-26 19:29:15.340425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.482 [2024-11-26 19:29:15.340444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.482 [2024-11-26 19:29:15.344432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.482 [2024-11-26 19:29:15.344694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.482 [2024-11-26 19:29:15.344714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.482 [2024-11-26 19:29:15.348860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.482 [2024-11-26 
19:29:15.349120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.482 [2024-11-26 19:29:15.349140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.482 [2024-11-26 19:29:15.353528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.482 [2024-11-26 19:29:15.353795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.482 [2024-11-26 19:29:15.353815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.482 [2024-11-26 19:29:15.358915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.482 [2024-11-26 19:29:15.359150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.482 [2024-11-26 19:29:15.359169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.482 [2024-11-26 19:29:15.363563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.482 [2024-11-26 19:29:15.363808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.482 [2024-11-26 19:29:15.363828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.482 [2024-11-26 19:29:15.368036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.482 [2024-11-26 19:29:15.368289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.482 [2024-11-26 19:29:15.368308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.372537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.372781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.372800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.376903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.377138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.377157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.381084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 
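Each repetition in the output above is the same three-entry group: data_crc32_calc_done() in tcp.c reports the injected data digest mismatch on the qpair, the WRITE it hit is printed with its lba and length, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (printed as 00/22, i.e. generic status code 0x22), a retryable status rather than a hard failure, which is what this error-injection case is exercising. As a rough sanity check over a saved copy of this console output (the log file name below is an assumption), the two patterns can be tallied and compared:

LOG=nvmf_digest_error.console.log   # assumed path to a saved copy of this output
errors=$(grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' "$LOG")
transient=$(grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$LOG")
echo "digest errors: $errors  transient transport completions: $transient"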
00:27:52.483 [2024-11-26 19:29:15.381326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.381345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.385587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.385854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.385873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.390200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.390449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.390467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.395004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.395241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.395260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.399925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.400168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.400191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.404883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.405124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.405142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.409781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.410014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.410033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.414147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) 
with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.414377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.414396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.418645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.418882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.418902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.422833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.423075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.423093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.427074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.427311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.427329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.431471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.431721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.431739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.435860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.436125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.436144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.440369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.440610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.440629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.444487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.444732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.444751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.448762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.448995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.449014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.453231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.453464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.453482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.457906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.458149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.458168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.462741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.462975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.462994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.467870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.468105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.468123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.472771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.473031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.473049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.477237] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.477472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.477490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.481708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.481939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.481958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.486677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.486905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.486924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.483 [2024-11-26 19:29:15.491357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.483 [2024-11-26 19:29:15.491615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.483 [2024-11-26 19:29:15.491633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.484 [2024-11-26 19:29:15.496137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.496378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.496397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.484 [2024-11-26 19:29:15.500955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.501180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.501199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.484 [2024-11-26 19:29:15.505427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.505660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.505684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.484 [2024-11-26 19:29:15.510782] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.511115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.511133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.484 [2024-11-26 19:29:15.517002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.517262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.517282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.484 [2024-11-26 19:29:15.521894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.522153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.522174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.484 [2024-11-26 19:29:15.526685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.526922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.526941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.484 [2024-11-26 19:29:15.531370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.531603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.531622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.484 [2024-11-26 19:29:15.536116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.536366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.536385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.484 [2024-11-26 19:29:15.540788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.541009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.541028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.484 
[2024-11-26 19:29:15.545471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.545749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.545768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.484 [2024-11-26 19:29:15.550269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.550501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.550519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.484 [2024-11-26 19:29:15.555684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.555916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.555934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.484 [2024-11-26 19:29:15.561520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.561782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.561801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.484 [2024-11-26 19:29:15.567018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.567254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.567272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.484 [2024-11-26 19:29:15.571937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.572199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.572218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.484 [2024-11-26 19:29:15.576775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.577009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.577028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:27:52.484 [2024-11-26 19:29:15.581634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.581895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.581913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.484 [2024-11-26 19:29:15.587174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.587283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.587301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.484 [2024-11-26 19:29:15.591458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.484 [2024-11-26 19:29:15.591698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.484 [2024-11-26 19:29:15.591717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.745 [2024-11-26 19:29:15.595923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.745 [2024-11-26 19:29:15.596178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.745 [2024-11-26 19:29:15.596197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.745 [2024-11-26 19:29:15.600229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.745 [2024-11-26 19:29:15.600479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.745 [2024-11-26 19:29:15.600498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.745 [2024-11-26 19:29:15.604759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.745 [2024-11-26 19:29:15.605004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.745 [2024-11-26 19:29:15.605023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.745 [2024-11-26 19:29:15.609208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.745 [2024-11-26 19:29:15.609458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.745 [2024-11-26 19:29:15.609478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.745 [2024-11-26 19:29:15.613645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.745 [2024-11-26 19:29:15.613918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.745 [2024-11-26 19:29:15.613939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.618213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.618449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.618469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.622635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.622875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.622894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.627093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.627341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.627360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.631474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.631738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.631757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.635699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.635943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.635962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.639909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.640165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.640195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.644089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.644331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.644353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.648269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.648514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.648533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.652416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.652655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.652681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.656566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.656802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.656821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.660710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.660949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.660968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.664832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.665073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.665092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.668940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.669199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.669218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.673092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.673361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.673380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.677223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.677468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.677487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.681341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.681579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.681601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.685454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.685703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.685721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.689599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.689844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.689864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.693724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.693967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.693986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.697814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.698056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.698074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.701926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.702161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.702180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.746 [2024-11-26 19:29:15.706059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.746 [2024-11-26 19:29:15.706303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.746 [2024-11-26 19:29:15.706321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.710194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.710438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.710457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.714345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.714579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.714598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.718660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.718925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.718944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.723772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.724015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.724034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.728596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.728835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 
19:29:15.728854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.733007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.733243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.733262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.737554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.737801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.737820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.742016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.742271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.742290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.746627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.746877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.746897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.751532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.751773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.751791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.756084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.756317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.756336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.760442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.760719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:52.747 [2024-11-26 19:29:15.760739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.764817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.765066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.765084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.768972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.769221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.769241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.773660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.773906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.773924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.778159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.778404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.778423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.782903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.783134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.783153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.787802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.788038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.788058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.792653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.792917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.792949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.798032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.798271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.798297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.803250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.803504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.803524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.807837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.747 [2024-11-26 19:29:15.808108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.747 [2024-11-26 19:29:15.808128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.747 [2024-11-26 19:29:15.812230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.748 [2024-11-26 19:29:15.812471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.748 [2024-11-26 19:29:15.812490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.748 6314.00 IOPS, 789.25 MiB/s [2024-11-26T18:29:15.862Z] [2024-11-26 19:29:15.817843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.748 [2024-11-26 19:29:15.818055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.748 [2024-11-26 19:29:15.818075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.748 [2024-11-26 19:29:15.822389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.748 [2024-11-26 19:29:15.822623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.748 [2024-11-26 19:29:15.822641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.748 [2024-11-26 19:29:15.826922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.748 [2024-11-26 19:29:15.827164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.748 [2024-11-26 19:29:15.827184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.748 [2024-11-26 19:29:15.831215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.748 [2024-11-26 19:29:15.831472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.748 [2024-11-26 19:29:15.831491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.748 [2024-11-26 19:29:15.835484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.748 [2024-11-26 19:29:15.835744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.748 [2024-11-26 19:29:15.835764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.748 [2024-11-26 19:29:15.839992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.748 [2024-11-26 19:29:15.840234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.748 [2024-11-26 19:29:15.840253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.748 [2024-11-26 19:29:15.844395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.748 [2024-11-26 19:29:15.844640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.748 [2024-11-26 19:29:15.844660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.748 [2024-11-26 19:29:15.848905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.748 [2024-11-26 19:29:15.849160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.748 [2024-11-26 19:29:15.849179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.748 [2024-11-26 19:29:15.853043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:52.748 [2024-11-26 19:29:15.853287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.748 [2024-11-26 19:29:15.853306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.009 [2024-11-26 19:29:15.857151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 
19:29:15.857406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.857425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.861230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.861495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.861513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.865320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.865572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.865590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.869432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.869686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.869706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.873482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.873741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.873760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.877580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.877834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.877853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.881832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.882080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.882098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.886400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 
00:27:53.010 [2024-11-26 19:29:15.886665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.886691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.891510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.891758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.891777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.896262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.896499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.896518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.900636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.900879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.900898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.904881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.905119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.905138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.909169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.909403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.909422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.913641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.913876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.913899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.917910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with 
pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.918164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.918194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.922035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.922276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.922295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.926419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.926666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.926691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.930970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.931213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.931232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.935839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.936093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.936112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.940807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.941036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.941055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.945901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.946120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.946139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.951332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.951568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.951587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.955749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.955972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.955991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.960178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.960438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.960456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.964430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.964681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.964699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.010 [2024-11-26 19:29:15.968680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.010 [2024-11-26 19:29:15.968937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.010 [2024-11-26 19:29:15.968957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:15.973089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:15.973357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:15.973375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:15.977483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:15.977733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:15.977752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:15.981876] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:15.982110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:15.982129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:15.986007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:15.986243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:15.986261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:15.990115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:15.990352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:15.990370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:15.994164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:15.994419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:15.994438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:15.998247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:15.998492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:15.998511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:16.002255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:16.002497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:16.002516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:16.006261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:16.006494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:16.006513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:16.010254] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:16.010504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:16.010523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:16.014279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:16.014527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:16.014548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:16.018261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:16.018522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:16.018541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:16.022482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:16.022725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:16.022743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:16.026594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:16.026836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:16.026858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:16.030724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:16.030968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:16.030986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:16.034784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:16.035026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:16.035044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.011 
[2024-11-26 19:29:16.038892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:16.039144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:16.039163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:16.042969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:16.043212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:16.043231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:16.047094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:16.047321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:16.047340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:16.051220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:16.051459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:16.051478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:16.055333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:16.055575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:16.055594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:16.059486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:16.059736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:16.059755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:16.063600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:16.063851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:16.063869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:27:53.011 [2024-11-26 19:29:16.067745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.011 [2024-11-26 19:29:16.067984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.011 [2024-11-26 19:29:16.068002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.012 [2024-11-26 19:29:16.071839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.012 [2024-11-26 19:29:16.072097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.012 [2024-11-26 19:29:16.072116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.012 [2024-11-26 19:29:16.075997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.012 [2024-11-26 19:29:16.076246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.012 [2024-11-26 19:29:16.076265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.012 [2024-11-26 19:29:16.080105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.012 [2024-11-26 19:29:16.080353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.012 [2024-11-26 19:29:16.080371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.012 [2024-11-26 19:29:16.084194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.012 [2024-11-26 19:29:16.084438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.012 [2024-11-26 19:29:16.084457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.012 [2024-11-26 19:29:16.088268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.012 [2024-11-26 19:29:16.088530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.012 [2024-11-26 19:29:16.088554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.012 [2024-11-26 19:29:16.092422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.012 [2024-11-26 19:29:16.092681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.012 [2024-11-26 19:29:16.092700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.012 [2024-11-26 19:29:16.096537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.012 [2024-11-26 19:29:16.096786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.012 [2024-11-26 19:29:16.096805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.012 [2024-11-26 19:29:16.100734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.012 [2024-11-26 19:29:16.100964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.012 [2024-11-26 19:29:16.100984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.012 [2024-11-26 19:29:16.104911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.012 [2024-11-26 19:29:16.105185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.012 [2024-11-26 19:29:16.105204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.012 [2024-11-26 19:29:16.109040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.012 [2024-11-26 19:29:16.109334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.012 [2024-11-26 19:29:16.109354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.012 [2024-11-26 19:29:16.113172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.012 [2024-11-26 19:29:16.113441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.012 [2024-11-26 19:29:16.113460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.012 [2024-11-26 19:29:16.117429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.012 [2024-11-26 19:29:16.117685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.012 [2024-11-26 19:29:16.117705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.121612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.121855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.121874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.125704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.125966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.125985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.129873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.130125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.130144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.134060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.134294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.134316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.138190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.138461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.138480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.142302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.142560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.142578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.146380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.146609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.146627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.150492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.150729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.150748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.154577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.154830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.154849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.158631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.158883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.158901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.162732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.162972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.162990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.166720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.166966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.166984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.170828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.171061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.171080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.174923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.175159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.175178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.178948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.179194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.179212] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.183229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.183474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.183492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.188323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.188662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.188686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.194124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.194454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.194473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.199806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.200055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.200073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.205474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.205759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.205777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.211329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.211594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.211612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.217442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.217736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.217756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.223126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.223423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.223442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.228606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.228930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.228949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.234332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.234638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.234657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.239792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.240073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.240091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.245007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.245264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.245283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.250355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.250610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.250629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.255773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.255951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 
19:29:16.255969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.261436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.261608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.261629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.267271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.267467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.267485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.272753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.272934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.272969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.277747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.277902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.277920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.281931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.282078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.282096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.286115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.286345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.273 [2024-11-26 19:29:16.286364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.290428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.273 [2024-11-26 19:29:16.290604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:53.273 [2024-11-26 19:29:16.290621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.273 [2024-11-26 19:29:16.294697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.294893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.294912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.298832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.298990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.299008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.303037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.303177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.303195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.307326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.307549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.307567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.311396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.311557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.311575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.315919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.316110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.316133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.320476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.320642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.320659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.325640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.325819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.325836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.329439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.329609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.329627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.333202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.333373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.333390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.336979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.337154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.337171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.340777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.340968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.340986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.344627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.344804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.344821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.348441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.348606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6176 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.348624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.352238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.352419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.352436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.356045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.356219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.356236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.359815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.360001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.360019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.363653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.363848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.363867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.367438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.367619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.367637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.371279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.371472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.371494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.375148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.375312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.375330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.378867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.379049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.379082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.274 [2024-11-26 19:29:16.382728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.274 [2024-11-26 19:29:16.382909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.274 [2024-11-26 19:29:16.382927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.386563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.386745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.386763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.390388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.390561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.390579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.394210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.394398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.394415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.398034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.398202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.398220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.401822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.401994] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.402011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.405559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.405744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.405761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.409336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.409500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.409517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.413095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.413268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.413285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.416857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.417035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.417052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.420621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.420807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.420824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.424359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.424524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.424541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.428305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.428574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.428593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.433381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.433650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.433674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.437998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.438186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.438204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.442134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.442315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.442333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.446173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.446340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.446357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.450161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.450334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.450351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.454242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.454429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.454446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.458121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 
19:29:16.458272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.458289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.462410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.462603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.536 [2024-11-26 19:29:16.462620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.536 [2024-11-26 19:29:16.466459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.536 [2024-11-26 19:29:16.466650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.466677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.470708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.470871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.470889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.474967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.475127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.475148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.478915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.479118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.479137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.483193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.483356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.483374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.487092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 
00:27:53.537 [2024-11-26 19:29:16.487254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.487271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.490977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.491127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.491144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.495434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.495576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.495593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.500420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.500605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.500622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.505777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.505908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.505925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.511237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.511433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.511451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.517389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.517554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.517575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.522531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with 
pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.522747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.522767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.527872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.528053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.528071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.532043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.532191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.532209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.535945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.536116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.536132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.539939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.540114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.540130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.543806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.544003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.544032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.547814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.547983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.548000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.552511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.552663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.552686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.557878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.558255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.558305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.563486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.563630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.563650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.568262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.568406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.537 [2024-11-26 19:29:16.568424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.537 [2024-11-26 19:29:16.573243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.537 [2024-11-26 19:29:16.573385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.538 [2024-11-26 19:29:16.573403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.538 [2024-11-26 19:29:16.577747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.538 [2024-11-26 19:29:16.577913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.538 [2024-11-26 19:29:16.577930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.538 [2024-11-26 19:29:16.582246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.538 [2024-11-26 19:29:16.582399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.538 [2024-11-26 19:29:16.582417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.538 [2024-11-26 19:29:16.586769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.538 [2024-11-26 19:29:16.586930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.538 [2024-11-26 19:29:16.586948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.538 [2024-11-26 19:29:16.591283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.538 [2024-11-26 19:29:16.591435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.538 [2024-11-26 19:29:16.591453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.538 [2024-11-26 19:29:16.596653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.538 [2024-11-26 19:29:16.597035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.538 [2024-11-26 19:29:16.597055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.538 [2024-11-26 19:29:16.601377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.538 [2024-11-26 19:29:16.601556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.538 [2024-11-26 19:29:16.601574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.538 [2024-11-26 19:29:16.605813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.538 [2024-11-26 19:29:16.605951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.538 [2024-11-26 19:29:16.605969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.538 [2024-11-26 19:29:16.610322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.538 [2024-11-26 19:29:16.610477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.538 [2024-11-26 19:29:16.610494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.538 [2024-11-26 19:29:16.614476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.538 [2024-11-26 19:29:16.614630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.538 [2024-11-26 19:29:16.614648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.538 [2024-11-26 19:29:16.618689] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.538 [2024-11-26 19:29:16.618820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.538 [2024-11-26 19:29:16.618838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.538 [2024-11-26 19:29:16.623519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.538 [2024-11-26 19:29:16.623679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.538 [2024-11-26 19:29:16.623697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.538 [2024-11-26 19:29:16.628577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.538 [2024-11-26 19:29:16.628763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.538 [2024-11-26 19:29:16.628780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.538 [2024-11-26 19:29:16.632708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.538 [2024-11-26 19:29:16.632862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.538 [2024-11-26 19:29:16.632880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.538 [2024-11-26 19:29:16.636676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.538 [2024-11-26 19:29:16.636860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.538 [2024-11-26 19:29:16.636882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.538 [2024-11-26 19:29:16.640581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.538 [2024-11-26 19:29:16.640766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.538 [2024-11-26 19:29:16.640783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.538 [2024-11-26 19:29:16.644457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.538 [2024-11-26 19:29:16.644629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.538 [2024-11-26 19:29:16.644646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.799 
[2024-11-26 19:29:16.648339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.648502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.648520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.799 [2024-11-26 19:29:16.652229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.652403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.652420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.799 [2024-11-26 19:29:16.656140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.656318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.656336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.799 [2024-11-26 19:29:16.660004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.660180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.660198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.799 [2024-11-26 19:29:16.663808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.663974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.663991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.799 [2024-11-26 19:29:16.667638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.667814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.667832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.799 [2024-11-26 19:29:16.671438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.671599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.671616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:27:53.799 [2024-11-26 19:29:16.675311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.675470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.675487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.799 [2024-11-26 19:29:16.679181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.679353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.679371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.799 [2024-11-26 19:29:16.682997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.683167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.683185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.799 [2024-11-26 19:29:16.686817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.686988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.687006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.799 [2024-11-26 19:29:16.690588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.690775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.690793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.799 [2024-11-26 19:29:16.694417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.694568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.694586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.799 [2024-11-26 19:29:16.698219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.698384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.698401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:27:53.799 [2024-11-26 19:29:16.702053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.702220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.702238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.799 [2024-11-26 19:29:16.706761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.706913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.706930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.799 [2024-11-26 19:29:16.711225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.711359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.711377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.799 [2024-11-26 19:29:16.715486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.715617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.715634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.799 [2024-11-26 19:29:16.719444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.719587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.719605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.799 [2024-11-26 19:29:16.723395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.723573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.723590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.799 [2024-11-26 19:29:16.727338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.799 [2024-11-26 19:29:16.727530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.799 [2024-11-26 19:29:16.727548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.731088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.731257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.731274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.734908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.735079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.735096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.739122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.739303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.739324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.743684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.743763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.743781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.747728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.747878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.747895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.751868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.752034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.752051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.755699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.755850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.755867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.759553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.759726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.759743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.763489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.763660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.763684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.767256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.767414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.767431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.771583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.771735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.771753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.777106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.777277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.777294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.781321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.781476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.781492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.785451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.785577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.785594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.789436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.789610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.789627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.793305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.793476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.793492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.797335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.797505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.797522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.801715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.801875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.801892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.806276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.806442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.806459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.810879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.811029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.811046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.800 [2024-11-26 19:29:16.814875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.800 [2024-11-26 19:29:16.815044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.800 [2024-11-26 19:29:16.815061] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.800 6754.50 IOPS, 844.31 MiB/s [2024-11-26T18:29:16.915Z] [2024-11-26 19:29:16.819828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x882660) with pdu=0x200016eff3c8 00:27:53.801 [2024-11-26 19:29:16.819891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.801 [2024-11-26 19:29:16.819909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.801 00:27:53.801 Latency(us) 00:27:53.801 [2024-11-26T18:29:16.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.801 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:53.801 nvme0n1 : 2.00 6752.10 844.01 0.00 0.00 2365.52 1771.03 9799.19 00:27:53.801 [2024-11-26T18:29:16.915Z] =================================================================================================================== 00:27:53.801 [2024-11-26T18:29:16.915Z] Total : 6752.10 844.01 0.00 0.00 2365.52 1771.03 9799.19 00:27:53.801 { 00:27:53.801 "results": [ 00:27:53.801 { 00:27:53.801 "job": "nvme0n1", 00:27:53.801 "core_mask": "0x2", 00:27:53.801 "workload": "randwrite", 00:27:53.801 "status": "finished", 00:27:53.801 "queue_depth": 16, 00:27:53.801 "io_size": 131072, 00:27:53.801 "runtime": 2.00308, 00:27:53.801 "iops": 6752.101763284541, 00:27:53.801 "mibps": 844.0127204105677, 00:27:53.801 "io_failed": 0, 00:27:53.801 "io_timeout": 0, 00:27:53.801 "avg_latency_us": 2365.5242562450494, 00:27:53.801 "min_latency_us": 1771.032380952381, 00:27:53.801 "max_latency_us": 9799.192380952381 00:27:53.801 } 00:27:53.801 ], 00:27:53.801 "core_count": 1 00:27:53.801 } 00:27:53.801 19:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:53.801 19:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:53.801 19:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:53.801 | .driver_specific 00:27:53.801 | .nvme_error 00:27:53.801 | .status_code 00:27:53.801 | .command_transient_transport_error' 00:27:53.801 19:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:54.060 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 437 > 0 )) 00:27:54.060 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3895009 00:27:54.060 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3895009 ']' 00:27:54.060 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3895009 00:27:54.060 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:54.060 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:54.060 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3895009 00:27:54.060 19:29:17 
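For reference, the "(( 437 > 0 ))" assertion traced above is where the digest-error test reads back the injected-error count: it pulls per-bdev NVMe error counters from the still-running bdevperf instance over its RPC socket. A minimal sketch of the same query, runnable only while the /var/tmp/bperf.sock socket shown in the trace is listening; the condensed jq path is equivalent to the piped filter digest.sh uses:

  # Query the transient transport error counter from bdevperf's iostat output
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'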
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:54.060 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:54.060 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3895009' 00:27:54.060 killing process with pid 3895009 00:27:54.060 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3895009 00:27:54.060 Received shutdown signal, test time was about 2.000000 seconds 00:27:54.060 00:27:54.060 Latency(us) 00:27:54.060 [2024-11-26T18:29:17.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.060 [2024-11-26T18:29:17.174Z] =================================================================================================================== 00:27:54.060 [2024-11-26T18:29:17.174Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:54.060 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3895009 00:27:54.320 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3893343 00:27:54.320 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3893343 ']' 00:27:54.320 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3893343 00:27:54.320 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:54.320 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:54.320 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3893343 00:27:54.320 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:54.320 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:54.320 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3893343' 00:27:54.320 killing process with pid 3893343 00:27:54.320 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3893343 00:27:54.320 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3893343 00:27:54.580 00:27:54.580 real 0m13.853s 00:27:54.580 user 0m26.377s 00:27:54.580 sys 0m4.656s 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:54.580 ************************************ 00:27:54.580 END TEST nvmf_digest_error 00:27:54.580 ************************************ 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == 
tcp ']' 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:54.580 rmmod nvme_tcp 00:27:54.580 rmmod nvme_fabrics 00:27:54.580 rmmod nvme_keyring 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3893343 ']' 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3893343 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3893343 ']' 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3893343 00:27:54.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3893343) - No such process 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3893343 is not found' 00:27:54.580 Process with pid 3893343 is not found 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:54.580 19:29:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:57.116 00:27:57.116 real 0m36.906s 00:27:57.116 user 0m55.888s 00:27:57.116 sys 0m13.671s 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:57.116 ************************************ 00:27:57.116 END TEST nvmf_digest 00:27:57.116 ************************************ 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.116 ************************************ 00:27:57.116 START TEST nvmf_bdevperf 00:27:57.116 ************************************ 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:57.116 * Looking for test storage... 00:27:57.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:57.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.116 --rc genhtml_branch_coverage=1 00:27:57.116 --rc genhtml_function_coverage=1 00:27:57.116 --rc genhtml_legend=1 00:27:57.116 --rc geninfo_all_blocks=1 00:27:57.116 --rc geninfo_unexecuted_blocks=1 00:27:57.116 00:27:57.116 ' 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:57.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.116 --rc genhtml_branch_coverage=1 00:27:57.116 --rc genhtml_function_coverage=1 00:27:57.116 --rc genhtml_legend=1 00:27:57.116 --rc geninfo_all_blocks=1 00:27:57.116 --rc geninfo_unexecuted_blocks=1 00:27:57.116 00:27:57.116 ' 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:57.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.116 --rc genhtml_branch_coverage=1 00:27:57.116 --rc genhtml_function_coverage=1 00:27:57.116 --rc genhtml_legend=1 00:27:57.116 --rc geninfo_all_blocks=1 00:27:57.116 --rc geninfo_unexecuted_blocks=1 00:27:57.116 00:27:57.116 ' 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:57.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.116 --rc genhtml_branch_coverage=1 00:27:57.116 --rc genhtml_function_coverage=1 00:27:57.116 --rc genhtml_legend=1 00:27:57.116 --rc geninfo_all_blocks=1 00:27:57.116 --rc geninfo_unexecuted_blocks=1 00:27:57.116 00:27:57.116 ' 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
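The version probe traced just above (lt 1.15 2) is what selects the pre-2.0 lcov flag set before the bdevperf run. A rough usage sketch of the same idea, assuming the lt helper from scripts/common.sh is in scope; lcov_ver is a local name introduced here, not one from the trace:

  # Pick coverage flags based on the installed lcov version, as the trace does
  lcov_ver=$(lcov --version | awk '{print $NF}')
  if lt "$lcov_ver" 2; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi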
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.116 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:57.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:57.117 19:29:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:03.688 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:03.688 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:03.689 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
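At this point nvmf/common.sh has matched the node's NICs against its supported-ID tables (Intel E810 0x1592/0x159b, X722 0x37d2, several Mellanox ConnectX parts), kept the two E810 functions for this NET_TYPE=phy tcp run, and is globbing /sys/bus/pci/devices/$pci/net/ to find the netdev behind each function; the loop that follows reports cvl_0_0 and cvl_0_1. A stand-alone sketch of that sysfs lookup, for illustration only (list_pci_netdevs is a made-up helper name, not part of common.sh):

    # list_pci_netdevs: print the net devices backed by one PCI function (hypothetical helper)
    list_pci_netdevs() {
        local pci=$1 dev
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] || continue   # empty glob => NIC bound to vfio-pci/uio, no netdev
            printf 'Found net device under %s: %s (%s)\n' \
                "$pci" "${dev##*/}" "$(cat "$dev/operstate")"
        done
    }

    list_pci_netdevs 0000:86:00.0    # on this WFP6 node this would report cvl_0_0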
00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:03.689 Found net devices under 0000:86:00.0: cvl_0_0 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:03.689 Found net devices under 0000:86:00.1: cvl_0_1 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:03.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:03.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:28:03.689 00:28:03.689 --- 10.0.0.2 ping statistics --- 00:28:03.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.689 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:03.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:03.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:28:03.689 00:28:03.689 --- 10.0.0.1 ping statistics --- 00:28:03.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.689 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3899056 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3899056 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3899056 ']' 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:03.689 19:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:03.689 [2024-11-26 19:29:25.905973] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
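Condensed, the nvmf_tcp_init/nvmfappstart sequence traced above moves the first E810 port into a private network namespace, addresses the two ports back-to-back, opens TCP/4420 on the initiator side, and then launches nvmf_tgt inside the namespace and waits for its RPC socket. A sketch of the same flow, assuming the interface names, addresses and socket path shown in the log; the final loop is a crude stand-in for the suite's waitforlisten helper:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                   # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator port stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done   # stand-in for waitforlisten

Putting the target port in its own namespace is what lets a single physical machine act as both initiator and target over real E810 hardware.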
00:28:03.689 [2024-11-26 19:29:25.906020] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.689 [2024-11-26 19:29:25.984117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:03.689 [2024-11-26 19:29:26.025842] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.689 [2024-11-26 19:29:26.025882] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.689 [2024-11-26 19:29:26.025889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:03.689 [2024-11-26 19:29:26.025895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:03.689 [2024-11-26 19:29:26.025900] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:03.689 [2024-11-26 19:29:26.027303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:03.689 [2024-11-26 19:29:26.027408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.689 [2024-11-26 19:29:26.027410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:03.689 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:03.689 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:03.689 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:03.689 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:03.689 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:03.689 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:03.689 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:03.690 [2024-11-26 19:29:26.163811] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:03.690 Malloc0 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
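The rpc_cmd lines above (plus the nvmf_subsystem_add_ns and nvmf_subsystem_add_listener calls that follow immediately below) provision the freshly started target over /var/tmp/spdk.sock. rpc_cmd is the harness wrapper; reproduced by hand with scripts/rpc.py, the same five steps would look roughly like this, with the arguments copied from the trace:

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192       # transport flags as traced ($NVMF_TRANSPORT_OPTS plus -u 8192)
    $RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The RPC socket is a Unix-domain socket, so it remains reachable from the root namespace even though the target's network stack sits inside cvl_0_0_ns_spdk.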
00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:03.690 [2024-11-26 19:29:26.230258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:03.690 { 00:28:03.690 "params": { 00:28:03.690 "name": "Nvme$subsystem", 00:28:03.690 "trtype": "$TEST_TRANSPORT", 00:28:03.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.690 "adrfam": "ipv4", 00:28:03.690 "trsvcid": "$NVMF_PORT", 00:28:03.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.690 "hdgst": ${hdgst:-false}, 00:28:03.690 "ddgst": ${ddgst:-false} 00:28:03.690 }, 00:28:03.690 "method": "bdev_nvme_attach_controller" 00:28:03.690 } 00:28:03.690 EOF 00:28:03.690 )") 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:03.690 19:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:03.690 "params": { 00:28:03.690 "name": "Nvme1", 00:28:03.690 "trtype": "tcp", 00:28:03.690 "traddr": "10.0.0.2", 00:28:03.690 "adrfam": "ipv4", 00:28:03.690 "trsvcid": "4420", 00:28:03.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:03.690 "hdgst": false, 00:28:03.690 "ddgst": false 00:28:03.690 }, 00:28:03.690 "method": "bdev_nvme_attach_controller" 00:28:03.690 }' 00:28:03.690 [2024-11-26 19:29:26.280246] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
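gen_nvmf_target_json, traced just above, expands its heredoc template once per subsystem and pipes the result through jq; bdevperf then reads it via --json /dev/fd/62, a process-substitution descriptor rather than a file on disk. To reproduce the first run by hand, the resolved bdev_nvme_attach_controller parameters (the ones printf prints above) can be wrapped in the standard SPDK JSON-config layout and written to an ordinary file; /tmp/bdevperf.json below is an illustrative name:

    # build a minimal bdevperf config with the attach parameters seen in the trace
    jq -n '{subsystems: [{subsystem: "bdev", config: [{
        method: "bdev_nvme_attach_controller",
        params: {name: "Nvme1", trtype: "tcp", traddr: "10.0.0.2", adrfam: "ipv4",
                 trsvcid: "4420", subnqn: "nqn.2016-06.io.spdk:cnode1",
                 hostnqn: "nqn.2016-06.io.spdk:host1", hdgst: false, ddgst: false}
      }]}]}' > /tmp/bdevperf.json

    # first bdevperf pass: 128 queue depth, 4 KiB verify workload for 1 second
    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1

Feeding the config through /dev/fd/NN just spares the harness a temporary file; the behaviour is the same.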
00:28:03.690 [2024-11-26 19:29:26.280287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899254 ] 00:28:03.690 [2024-11-26 19:29:26.353437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.690 [2024-11-26 19:29:26.395296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.690 Running I/O for 1 seconds... 00:28:04.626 11275.00 IOPS, 44.04 MiB/s 00:28:04.626 Latency(us) 00:28:04.626 [2024-11-26T18:29:27.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.626 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:04.626 Verification LBA range: start 0x0 length 0x4000 00:28:04.626 Nvme1n1 : 1.01 11287.15 44.09 0.00 0.00 11296.43 2324.97 12358.22 00:28:04.626 [2024-11-26T18:29:27.740Z] =================================================================================================================== 00:28:04.626 [2024-11-26T18:29:27.740Z] Total : 11287.15 44.09 0.00 0.00 11296.43 2324.97 12358.22 00:28:04.885 19:29:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3899491 00:28:04.885 19:29:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:04.885 19:29:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:04.885 19:29:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:04.885 19:29:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:04.885 19:29:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:04.885 19:29:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:04.885 19:29:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:04.885 { 00:28:04.885 "params": { 00:28:04.885 "name": "Nvme$subsystem", 00:28:04.885 "trtype": "$TEST_TRANSPORT", 00:28:04.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.885 "adrfam": "ipv4", 00:28:04.885 "trsvcid": "$NVMF_PORT", 00:28:04.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.885 "hdgst": ${hdgst:-false}, 00:28:04.885 "ddgst": ${ddgst:-false} 00:28:04.885 }, 00:28:04.885 "method": "bdev_nvme_attach_controller" 00:28:04.885 } 00:28:04.885 EOF 00:28:04.885 )") 00:28:04.885 19:29:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:04.885 19:29:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:28:04.885 19:29:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:04.885 19:29:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:04.885 "params": { 00:28:04.885 "name": "Nvme1", 00:28:04.885 "trtype": "tcp", 00:28:04.885 "traddr": "10.0.0.2", 00:28:04.885 "adrfam": "ipv4", 00:28:04.885 "trsvcid": "4420", 00:28:04.885 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:04.885 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:04.885 "hdgst": false, 00:28:04.885 "ddgst": false 00:28:04.885 }, 00:28:04.885 "method": "bdev_nvme_attach_controller" 00:28:04.885 }' 00:28:04.885 [2024-11-26 19:29:27.929481] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:28:04.885 [2024-11-26 19:29:27.929528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899491 ] 00:28:05.145 [2024-11-26 19:29:28.003397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.145 [2024-11-26 19:29:28.041702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.403 Running I/O for 15 seconds... 00:28:07.274 11216.00 IOPS, 43.81 MiB/s [2024-11-26T18:29:30.959Z] 11295.00 IOPS, 44.12 MiB/s [2024-11-26T18:29:30.959Z] 19:29:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3899056 00:28:07.845 19:29:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:07.845 [2024-11-26 19:29:30.897094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.845 [2024-11-26 19:29:30.897134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-11-26 19:29:30.897153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.845 [2024-11-26 19:29:30.897161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-11-26 19:29:30.897170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.845 [2024-11-26 19:29:30.897178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-11-26 19:29:30.897188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.845 [2024-11-26 19:29:30.897195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-11-26 19:29:30.897203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.845 [2024-11-26 19:29:30.897211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-11-26 19:29:30.897220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.845 [2024-11-26 
19:29:30.897227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845
[nvme_qpair.c output condensed: the same NOTICE pair repeats for every command still queued on qid:1 when the target was killed - nvme_io_qpair_print_command prints each outstanding WRITE (lba 97192 through 97848, len:8 each) and READ (lba 96960 through 97008), and spdk_nvme_print_completion reports every one of them as ABORTED - SQ DELETION (00/08), timestamps 19:29:30.897236 through 19:29:30.898581]
00:28:07.847 [2024-11-26 19:29:30.898588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.847 [2024-11-26 19:29:30.898595] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-11-26 19:29:30.898603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.847 [2024-11-26 19:29:30.898609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-11-26 19:29:30.898617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.847 [2024-11-26 19:29:30.898623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-11-26 19:29:30.898631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.847 [2024-11-26 19:29:30.898638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-11-26 19:29:30.898645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.847 [2024-11-26 19:29:30.898653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-11-26 19:29:30.898661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.847 [2024-11-26 19:29:30.898667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-11-26 19:29:30.898791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.847 [2024-11-26 19:29:30.898798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-11-26 19:29:30.898806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.847 [2024-11-26 19:29:30.898812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-11-26 19:29:30.898820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.847 [2024-11-26 19:29:30.898826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-11-26 19:29:30.898834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.847 [2024-11-26 19:29:30.898841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-11-26 19:29:30.898849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.847 [2024-11-26 19:29:30.898856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-11-26 19:29:30.898863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.847 [2024-11-26 19:29:30.898869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-11-26 19:29:30.898877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.847 [2024-11-26 19:29:30.898883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-11-26 19:29:30.898891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.847 [2024-11-26 19:29:30.898898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-11-26 19:29:30.898906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.847 [2024-11-26 19:29:30.898913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-11-26 19:29:30.898920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-11-26 19:29:30.898927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-11-26 19:29:30.898935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-11-26 19:29:30.898942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-11-26 19:29:30.898950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-11-26 19:29:30.898960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-11-26 19:29:30.898968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-11-26 19:29:30.898975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-11-26 19:29:30.898983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-11-26 19:29:30.898989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-11-26 19:29:30.898997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-11-26 19:29:30.899004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-11-26 19:29:30.899012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-11-26 19:29:30.899019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-11-26 19:29:30.899027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-11-26 19:29:30.899033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-11-26 19:29:30.899041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-11-26 19:29:30.899047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-11-26 19:29:30.899056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-11-26 19:29:30.899062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-11-26 19:29:30.899070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-11-26 19:29:30.899076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-11-26 19:29:30.899084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-11-26 19:29:30.899090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-11-26 19:29:30.899098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-11-26 19:29:30.899105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-11-26 19:29:30.899112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-11-26 19:29:30.899119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-11-26 19:29:30.899127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-11-26 19:29:30.899133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-11-26 19:29:30.899143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.848 [2024-11-26 19:29:30.899149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 
[2024-11-26 19:29:30.899157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b606c0 is same with the state(6) to be set 00:28:07.848 [2024-11-26 19:29:30.899165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:07.848 [2024-11-26 19:29:30.899170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:07.848 [2024-11-26 19:29:30.899177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97136 len:8 PRP1 0x0 PRP2 0x0 00:28:07.848 [2024-11-26 19:29:30.899184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-11-26 19:29:30.901981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.848 [2024-11-26 19:29:30.902033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:07.848 [2024-11-26 19:29:30.902637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.848 [2024-11-26 19:29:30.902652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:07.848 [2024-11-26 19:29:30.902660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:07.848 [2024-11-26 19:29:30.902842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:07.848 [2024-11-26 19:29:30.903017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.848 [2024-11-26 19:29:30.903024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.848 [2024-11-26 19:29:30.903033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.848 [2024-11-26 19:29:30.903040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.848 [2024-11-26 19:29:30.915271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.848 [2024-11-26 19:29:30.915690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.848 [2024-11-26 19:29:30.915708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:07.848 [2024-11-26 19:29:30.915716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:07.848 [2024-11-26 19:29:30.915890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:07.848 [2024-11-26 19:29:30.916062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.848 [2024-11-26 19:29:30.916071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.848 [2024-11-26 19:29:30.916077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:28:07.848 [2024-11-26 19:29:30.916084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.848 [2024-11-26 19:29:30.928200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.848 [2024-11-26 19:29:30.928601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.848 [2024-11-26 19:29:30.928617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:07.848 [2024-11-26 19:29:30.928627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:07.848 [2024-11-26 19:29:30.928816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:07.848 [2024-11-26 19:29:30.929003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.848 [2024-11-26 19:29:30.929011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.848 [2024-11-26 19:29:30.929018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.848 [2024-11-26 19:29:30.929025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:07.848 [2024-11-26 19:29:30.940987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.848 [2024-11-26 19:29:30.941411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.848 [2024-11-26 19:29:30.941428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:07.848 [2024-11-26 19:29:30.941435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:07.848 [2024-11-26 19:29:30.941604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:07.848 [2024-11-26 19:29:30.941780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.848 [2024-11-26 19:29:30.941789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.848 [2024-11-26 19:29:30.941795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.848 [2024-11-26 19:29:30.941801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
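The long run of nvme_io_qpair_print_command / spdk_nvme_print_completion pairs above is the I/O queue pair being drained during teardown: every queued READ and WRITE is completed with ABORTED - SQ DELETION (00/08), i.e. status code type 0x0 (generic) and status code 0x08 (Command Aborted due to SQ Deletion). A minimal, hypothetical helper for tallying those prints from a saved console log is sketched below; the script name, the aborted.log file name, and the READ/WRITE grouping are illustrative assumptions, not part of the SPDK test suite.

    # tally_aborts.py - hypothetical log-analysis sketch, not part of the SPDK tests
    import re
    from collections import Counter

    # Matches command prints such as:
    #   nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97592 len:8
    CMD_RE = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
    )

    def tally(path="aborted.log"):  # file name is an assumption for illustration
        counts = Counter()
        with open(path) as f:
            for line in f:
                # finditer handles console lines that carry several records each
                for m in CMD_RE.finditer(line):
                    counts[m.group(1)] += 1
        return counts

    if __name__ == "__main__":
        for op, n in sorted(tally().items()):
            print(f"{op}: {n} commands printed as aborted during qpair teardown")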
00:28:08.109 [2024-11-26 19:29:30.954118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.109 [2024-11-26 19:29:30.954525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.109 [2024-11-26 19:29:30.954542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.109 [2024-11-26 19:29:30.954549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.109 [2024-11-26 19:29:30.954729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.109 [2024-11-26 19:29:30.954903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.109 [2024-11-26 19:29:30.954911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.109 [2024-11-26 19:29:30.954917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.109 [2024-11-26 19:29:30.954923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.109 [2024-11-26 19:29:30.967179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.109 [2024-11-26 19:29:30.967584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.109 [2024-11-26 19:29:30.967600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.109 [2024-11-26 19:29:30.967607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.109 [2024-11-26 19:29:30.967783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.109 [2024-11-26 19:29:30.967955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.109 [2024-11-26 19:29:30.967963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.109 [2024-11-26 19:29:30.967969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.109 [2024-11-26 19:29:30.967975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.109 [2024-11-26 19:29:30.979933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.109 [2024-11-26 19:29:30.980369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.109 [2024-11-26 19:29:30.980386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.109 [2024-11-26 19:29:30.980393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.109 [2024-11-26 19:29:30.980562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.109 [2024-11-26 19:29:30.980753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.109 [2024-11-26 19:29:30.980761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.109 [2024-11-26 19:29:30.980768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.109 [2024-11-26 19:29:30.980774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.109 [2024-11-26 19:29:30.992894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.109 [2024-11-26 19:29:30.993330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.110 [2024-11-26 19:29:30.993347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.110 [2024-11-26 19:29:30.993354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.110 [2024-11-26 19:29:30.993522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.110 [2024-11-26 19:29:30.993697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.110 [2024-11-26 19:29:30.993706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.110 [2024-11-26 19:29:30.993712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.110 [2024-11-26 19:29:30.993718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.110 [2024-11-26 19:29:31.005779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.110 [2024-11-26 19:29:31.006183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.110 [2024-11-26 19:29:31.006228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.110 [2024-11-26 19:29:31.006252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.110 [2024-11-26 19:29:31.006776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.110 [2024-11-26 19:29:31.006945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.110 [2024-11-26 19:29:31.006953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.110 [2024-11-26 19:29:31.006962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.110 [2024-11-26 19:29:31.006969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.110 [2024-11-26 19:29:31.018562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.110 [2024-11-26 19:29:31.018975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.110 [2024-11-26 19:29:31.018991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.110 [2024-11-26 19:29:31.018998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.110 [2024-11-26 19:29:31.019166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.110 [2024-11-26 19:29:31.019333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.110 [2024-11-26 19:29:31.019341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.110 [2024-11-26 19:29:31.019347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.110 [2024-11-26 19:29:31.019354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.110 [2024-11-26 19:29:31.031371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.110 [2024-11-26 19:29:31.031768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.110 [2024-11-26 19:29:31.031785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.110 [2024-11-26 19:29:31.031791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.110 [2024-11-26 19:29:31.031950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.110 [2024-11-26 19:29:31.032109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.110 [2024-11-26 19:29:31.032116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.110 [2024-11-26 19:29:31.032122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.110 [2024-11-26 19:29:31.032127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.110 [2024-11-26 19:29:31.044208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.110 [2024-11-26 19:29:31.044531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.110 [2024-11-26 19:29:31.044547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.110 [2024-11-26 19:29:31.044554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.110 [2024-11-26 19:29:31.044735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.110 [2024-11-26 19:29:31.044903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.110 [2024-11-26 19:29:31.044911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.110 [2024-11-26 19:29:31.044917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.110 [2024-11-26 19:29:31.044923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.110 [2024-11-26 19:29:31.057057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.110 [2024-11-26 19:29:31.057462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.110 [2024-11-26 19:29:31.057507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.110 [2024-11-26 19:29:31.057530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.110 [2024-11-26 19:29:31.058058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.110 [2024-11-26 19:29:31.058232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.110 [2024-11-26 19:29:31.058240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.110 [2024-11-26 19:29:31.058246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.110 [2024-11-26 19:29:31.058252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.110 [2024-11-26 19:29:31.069808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.110 [2024-11-26 19:29:31.070200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.110 [2024-11-26 19:29:31.070216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.110 [2024-11-26 19:29:31.070222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.110 [2024-11-26 19:29:31.070382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.110 [2024-11-26 19:29:31.070540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.110 [2024-11-26 19:29:31.070547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.110 [2024-11-26 19:29:31.070553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.110 [2024-11-26 19:29:31.070558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.110 [2024-11-26 19:29:31.082827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.110 [2024-11-26 19:29:31.083246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.110 [2024-11-26 19:29:31.083263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.110 [2024-11-26 19:29:31.083270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.110 [2024-11-26 19:29:31.083442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.110 [2024-11-26 19:29:31.083615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.110 [2024-11-26 19:29:31.083623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.110 [2024-11-26 19:29:31.083629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.110 [2024-11-26 19:29:31.083636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.110 [2024-11-26 19:29:31.095621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.110 [2024-11-26 19:29:31.096036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.110 [2024-11-26 19:29:31.096053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.110 [2024-11-26 19:29:31.096063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.110 [2024-11-26 19:29:31.096231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.110 [2024-11-26 19:29:31.096400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.110 [2024-11-26 19:29:31.096407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.110 [2024-11-26 19:29:31.096413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.110 [2024-11-26 19:29:31.096419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.110 [2024-11-26 19:29:31.108379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.110 [2024-11-26 19:29:31.108789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.110 [2024-11-26 19:29:31.108804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.110 [2024-11-26 19:29:31.108811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.110 [2024-11-26 19:29:31.108971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.110 [2024-11-26 19:29:31.109129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.110 [2024-11-26 19:29:31.109136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.110 [2024-11-26 19:29:31.109142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.110 [2024-11-26 19:29:31.109147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.110 [2024-11-26 19:29:31.121141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.111 [2024-11-26 19:29:31.121572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.111 [2024-11-26 19:29:31.121617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.111 [2024-11-26 19:29:31.121640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.111 [2024-11-26 19:29:31.122091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.111 [2024-11-26 19:29:31.122259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.111 [2024-11-26 19:29:31.122267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.111 [2024-11-26 19:29:31.122273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.111 [2024-11-26 19:29:31.122279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.111 [2024-11-26 19:29:31.134174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.111 [2024-11-26 19:29:31.134601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.111 [2024-11-26 19:29:31.134644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.111 [2024-11-26 19:29:31.134667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.111 [2024-11-26 19:29:31.135173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.111 [2024-11-26 19:29:31.135350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.111 [2024-11-26 19:29:31.135358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.111 [2024-11-26 19:29:31.135364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.111 [2024-11-26 19:29:31.135370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.111 [2024-11-26 19:29:31.146900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.111 [2024-11-26 19:29:31.147313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.111 [2024-11-26 19:29:31.147329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.111 [2024-11-26 19:29:31.147336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.111 [2024-11-26 19:29:31.147504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.111 [2024-11-26 19:29:31.147697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.111 [2024-11-26 19:29:31.147705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.111 [2024-11-26 19:29:31.147712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.111 [2024-11-26 19:29:31.147718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.111 [2024-11-26 19:29:31.159958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.111 [2024-11-26 19:29:31.160347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.111 [2024-11-26 19:29:31.160363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.111 [2024-11-26 19:29:31.160370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.111 [2024-11-26 19:29:31.160544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.111 [2024-11-26 19:29:31.160725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.111 [2024-11-26 19:29:31.160733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.111 [2024-11-26 19:29:31.160740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.111 [2024-11-26 19:29:31.160746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.111 [2024-11-26 19:29:31.172965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.111 [2024-11-26 19:29:31.173372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.111 [2024-11-26 19:29:31.173389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.111 [2024-11-26 19:29:31.173396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.111 [2024-11-26 19:29:31.173571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.111 [2024-11-26 19:29:31.173751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.111 [2024-11-26 19:29:31.173760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.111 [2024-11-26 19:29:31.173770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.111 [2024-11-26 19:29:31.173777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.111 [2024-11-26 19:29:31.185937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.111 [2024-11-26 19:29:31.186357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.111 [2024-11-26 19:29:31.186373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.111 [2024-11-26 19:29:31.186381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.111 [2024-11-26 19:29:31.186549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.111 [2024-11-26 19:29:31.186721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.111 [2024-11-26 19:29:31.186730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.111 [2024-11-26 19:29:31.186736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.111 [2024-11-26 19:29:31.186742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.111 [2024-11-26 19:29:31.198912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.111 [2024-11-26 19:29:31.199319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.111 [2024-11-26 19:29:31.199364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.111 [2024-11-26 19:29:31.199387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.111 [2024-11-26 19:29:31.199986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.111 [2024-11-26 19:29:31.200193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.111 [2024-11-26 19:29:31.200200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.111 [2024-11-26 19:29:31.200207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.111 [2024-11-26 19:29:31.200213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.111 [2024-11-26 19:29:31.211768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.111 [2024-11-26 19:29:31.212165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.111 [2024-11-26 19:29:31.212213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.111 [2024-11-26 19:29:31.212237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.111 [2024-11-26 19:29:31.212833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.111 [2024-11-26 19:29:31.213372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.111 [2024-11-26 19:29:31.213379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.111 [2024-11-26 19:29:31.213385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.111 [2024-11-26 19:29:31.213392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.373 [2024-11-26 19:29:31.224569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.373 [2024-11-26 19:29:31.224971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.373 [2024-11-26 19:29:31.225015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.373 [2024-11-26 19:29:31.225039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.373 [2024-11-26 19:29:31.225489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.373 [2024-11-26 19:29:31.225683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.373 [2024-11-26 19:29:31.225691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.373 [2024-11-26 19:29:31.225697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.373 [2024-11-26 19:29:31.225704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.373 [2024-11-26 19:29:31.237319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.373 [2024-11-26 19:29:31.237750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.373 [2024-11-26 19:29:31.237795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.373 [2024-11-26 19:29:31.237819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.373 [2024-11-26 19:29:31.238342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.373 [2024-11-26 19:29:31.238510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.373 [2024-11-26 19:29:31.238517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.373 [2024-11-26 19:29:31.238523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.373 [2024-11-26 19:29:31.238529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.373 [2024-11-26 19:29:31.250095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.373 [2024-11-26 19:29:31.250484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.373 [2024-11-26 19:29:31.250500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.373 [2024-11-26 19:29:31.250506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.373 [2024-11-26 19:29:31.250665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.373 [2024-11-26 19:29:31.250872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.373 [2024-11-26 19:29:31.250880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.373 [2024-11-26 19:29:31.250887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.373 [2024-11-26 19:29:31.250893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.373 [2024-11-26 19:29:31.262918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.373 [2024-11-26 19:29:31.263314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.373 [2024-11-26 19:29:31.263330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.373 [2024-11-26 19:29:31.263340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.373 [2024-11-26 19:29:31.263508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.373 [2024-11-26 19:29:31.263681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.373 [2024-11-26 19:29:31.263689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.373 [2024-11-26 19:29:31.263696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.373 [2024-11-26 19:29:31.263702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.373 [2024-11-26 19:29:31.275678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.373 [2024-11-26 19:29:31.276098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.373 [2024-11-26 19:29:31.276142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.373 [2024-11-26 19:29:31.276165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.373 [2024-11-26 19:29:31.276598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.373 [2024-11-26 19:29:31.276782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.373 [2024-11-26 19:29:31.276790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.373 [2024-11-26 19:29:31.276796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.373 [2024-11-26 19:29:31.276802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.373 [2024-11-26 19:29:31.288538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.373 [2024-11-26 19:29:31.288955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.373 [2024-11-26 19:29:31.288971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.373 [2024-11-26 19:29:31.288978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.373 [2024-11-26 19:29:31.289146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.373 [2024-11-26 19:29:31.289314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.373 [2024-11-26 19:29:31.289322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.373 [2024-11-26 19:29:31.289328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.373 [2024-11-26 19:29:31.289335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.373 [2024-11-26 19:29:31.301367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.373 [2024-11-26 19:29:31.301784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.373 [2024-11-26 19:29:31.301800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.373 [2024-11-26 19:29:31.301962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.373 [2024-11-26 19:29:31.302172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.373 [2024-11-26 19:29:31.302345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.373 [2024-11-26 19:29:31.302352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.373 [2024-11-26 19:29:31.302358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.373 [2024-11-26 19:29:31.302364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.373 [2024-11-26 19:29:31.314429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.373 [2024-11-26 19:29:31.314840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.373 [2024-11-26 19:29:31.314856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.373 [2024-11-26 19:29:31.314864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.373 [2024-11-26 19:29:31.315037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.373 [2024-11-26 19:29:31.315209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.373 [2024-11-26 19:29:31.315217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.373 [2024-11-26 19:29:31.315224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.373 [2024-11-26 19:29:31.315230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.373 [2024-11-26 19:29:31.327346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.373 [2024-11-26 19:29:31.327747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.373 [2024-11-26 19:29:31.327764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.373 [2024-11-26 19:29:31.327771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.373 [2024-11-26 19:29:31.327939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.373 [2024-11-26 19:29:31.328107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.373 [2024-11-26 19:29:31.328115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.373 [2024-11-26 19:29:31.328121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.373 [2024-11-26 19:29:31.328127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.373 [2024-11-26 19:29:31.340166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.373 [2024-11-26 19:29:31.340560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.373 [2024-11-26 19:29:31.340605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.373 [2024-11-26 19:29:31.340628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.373 [2024-11-26 19:29:31.341225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.374 [2024-11-26 19:29:31.341693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.374 [2024-11-26 19:29:31.341701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.374 [2024-11-26 19:29:31.341710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.374 [2024-11-26 19:29:31.341717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.374 [2024-11-26 19:29:31.352939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.374 [2024-11-26 19:29:31.353328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.374 [2024-11-26 19:29:31.353344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.374 [2024-11-26 19:29:31.353350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.374 [2024-11-26 19:29:31.353509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.374 [2024-11-26 19:29:31.353667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.374 [2024-11-26 19:29:31.353681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.374 [2024-11-26 19:29:31.353687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.374 [2024-11-26 19:29:31.353693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.374 [2024-11-26 19:29:31.365722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.374 [2024-11-26 19:29:31.366107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.374 [2024-11-26 19:29:31.366151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.374 [2024-11-26 19:29:31.366174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.374 [2024-11-26 19:29:31.366657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.374 [2024-11-26 19:29:31.366848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.374 [2024-11-26 19:29:31.366857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.374 [2024-11-26 19:29:31.366863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.374 [2024-11-26 19:29:31.366868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.374 9501.33 IOPS, 37.11 MiB/s [2024-11-26T18:29:31.488Z] [2024-11-26 19:29:31.378561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.374 [2024-11-26 19:29:31.378972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.374 [2024-11-26 19:29:31.378989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.374 [2024-11-26 19:29:31.378996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.374 [2024-11-26 19:29:31.379164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.374 [2024-11-26 19:29:31.379332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.374 [2024-11-26 19:29:31.379340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.374 [2024-11-26 19:29:31.379346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.374 [2024-11-26 19:29:31.379352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.374 [2024-11-26 19:29:31.391440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.374 [2024-11-26 19:29:31.391894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.374 [2024-11-26 19:29:31.391911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.374 [2024-11-26 19:29:31.391918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.374 [2024-11-26 19:29:31.392087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.374 [2024-11-26 19:29:31.392254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.374 [2024-11-26 19:29:31.392262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.374 [2024-11-26 19:29:31.392268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.374 [2024-11-26 19:29:31.392274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.374 [2024-11-26 19:29:31.404244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.374 [2024-11-26 19:29:31.404638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.374 [2024-11-26 19:29:31.404653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.374 [2024-11-26 19:29:31.404661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.374 [2024-11-26 19:29:31.404838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.374 [2024-11-26 19:29:31.405011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.374 [2024-11-26 19:29:31.405019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.374 [2024-11-26 19:29:31.405025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.374 [2024-11-26 19:29:31.405031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.374 [2024-11-26 19:29:31.417135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.374 [2024-11-26 19:29:31.417554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.374 [2024-11-26 19:29:31.417570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.374 [2024-11-26 19:29:31.417577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.374 [2024-11-26 19:29:31.417755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.374 [2024-11-26 19:29:31.417939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.374 [2024-11-26 19:29:31.417947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.374 [2024-11-26 19:29:31.417953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.374 [2024-11-26 19:29:31.417959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.374 [2024-11-26 19:29:31.430126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.374 [2024-11-26 19:29:31.430562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.374 [2024-11-26 19:29:31.430578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.374 [2024-11-26 19:29:31.430591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.374 [2024-11-26 19:29:31.430769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.374 [2024-11-26 19:29:31.430942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.374 [2024-11-26 19:29:31.430950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.374 [2024-11-26 19:29:31.430956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.374 [2024-11-26 19:29:31.430963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.374 [2024-11-26 19:29:31.442982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.374 [2024-11-26 19:29:31.443325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.374 [2024-11-26 19:29:31.443341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.374 [2024-11-26 19:29:31.443348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.374 [2024-11-26 19:29:31.443516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.374 [2024-11-26 19:29:31.443689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.374 [2024-11-26 19:29:31.443697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.374 [2024-11-26 19:29:31.443705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.374 [2024-11-26 19:29:31.443711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.374 [2024-11-26 19:29:31.455818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.374 [2024-11-26 19:29:31.456225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.374 [2024-11-26 19:29:31.456240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.374 [2024-11-26 19:29:31.456247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.374 [2024-11-26 19:29:31.456414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.374 [2024-11-26 19:29:31.456581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.374 [2024-11-26 19:29:31.456589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.374 [2024-11-26 19:29:31.456595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.374 [2024-11-26 19:29:31.456601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.374 [2024-11-26 19:29:31.468566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.374 [2024-11-26 19:29:31.469007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.374 [2024-11-26 19:29:31.469023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.374 [2024-11-26 19:29:31.469031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.375 [2024-11-26 19:29:31.469603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.375 [2024-11-26 19:29:31.469781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.375 [2024-11-26 19:29:31.469789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.375 [2024-11-26 19:29:31.469796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.375 [2024-11-26 19:29:31.469802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.375 [2024-11-26 19:29:31.481585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.375 [2024-11-26 19:29:31.482007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.375 [2024-11-26 19:29:31.482053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.375 [2024-11-26 19:29:31.482076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.375 [2024-11-26 19:29:31.482646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.375 [2024-11-26 19:29:31.482849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.375 [2024-11-26 19:29:31.482857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.375 [2024-11-26 19:29:31.482863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.375 [2024-11-26 19:29:31.482869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.642 [2024-11-26 19:29:31.494557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.642 [2024-11-26 19:29:31.494894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.642 [2024-11-26 19:29:31.494910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.642 [2024-11-26 19:29:31.494917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.642 [2024-11-26 19:29:31.495086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.642 [2024-11-26 19:29:31.495258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.642 [2024-11-26 19:29:31.495266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.642 [2024-11-26 19:29:31.495272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.642 [2024-11-26 19:29:31.495278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.642 [2024-11-26 19:29:31.507461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.642 [2024-11-26 19:29:31.507871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.642 [2024-11-26 19:29:31.507887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.642 [2024-11-26 19:29:31.507894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.642 [2024-11-26 19:29:31.508062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.642 [2024-11-26 19:29:31.508229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.642 [2024-11-26 19:29:31.508236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.642 [2024-11-26 19:29:31.508246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.642 [2024-11-26 19:29:31.508252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.643 [2024-11-26 19:29:31.520271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.643 [2024-11-26 19:29:31.520706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.643 [2024-11-26 19:29:31.520723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.643 [2024-11-26 19:29:31.520730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.643 [2024-11-26 19:29:31.520898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.643 [2024-11-26 19:29:31.521066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.643 [2024-11-26 19:29:31.521076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.643 [2024-11-26 19:29:31.521082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.643 [2024-11-26 19:29:31.521090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.643 [2024-11-26 19:29:31.533210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.643 [2024-11-26 19:29:31.533630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.643 [2024-11-26 19:29:31.533698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.643 [2024-11-26 19:29:31.533723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.643 [2024-11-26 19:29:31.534173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.643 [2024-11-26 19:29:31.534342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.643 [2024-11-26 19:29:31.534349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.643 [2024-11-26 19:29:31.534355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.643 [2024-11-26 19:29:31.534362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.643 [2024-11-26 19:29:31.546348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.643 [2024-11-26 19:29:31.546754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.643 [2024-11-26 19:29:31.546771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.643 [2024-11-26 19:29:31.546778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.643 [2024-11-26 19:29:31.546952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.643 [2024-11-26 19:29:31.547125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.643 [2024-11-26 19:29:31.547133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.643 [2024-11-26 19:29:31.547140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.643 [2024-11-26 19:29:31.547146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.643 [2024-11-26 19:29:31.559226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.643 [2024-11-26 19:29:31.559625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.643 [2024-11-26 19:29:31.559642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.643 [2024-11-26 19:29:31.559649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.643 [2024-11-26 19:29:31.559824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.643 [2024-11-26 19:29:31.559999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.643 [2024-11-26 19:29:31.560008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.643 [2024-11-26 19:29:31.560014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.643 [2024-11-26 19:29:31.560020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.643 [2024-11-26 19:29:31.572103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.643 [2024-11-26 19:29:31.572509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.643 [2024-11-26 19:29:31.572526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.643 [2024-11-26 19:29:31.572532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.643 [2024-11-26 19:29:31.572706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.643 [2024-11-26 19:29:31.572874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.643 [2024-11-26 19:29:31.572882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.643 [2024-11-26 19:29:31.572888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.643 [2024-11-26 19:29:31.572894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.643 [2024-11-26 19:29:31.584877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.643 [2024-11-26 19:29:31.585160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.643 [2024-11-26 19:29:31.585176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.643 [2024-11-26 19:29:31.585183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.643 [2024-11-26 19:29:31.585351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.643 [2024-11-26 19:29:31.585522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.643 [2024-11-26 19:29:31.585530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.643 [2024-11-26 19:29:31.585537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.643 [2024-11-26 19:29:31.585542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.643 [2024-11-26 19:29:31.597662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.643 [2024-11-26 19:29:31.598016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.643 [2024-11-26 19:29:31.598032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.643 [2024-11-26 19:29:31.598042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.643 [2024-11-26 19:29:31.598216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.643 [2024-11-26 19:29:31.598389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.643 [2024-11-26 19:29:31.598397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.643 [2024-11-26 19:29:31.598403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.643 [2024-11-26 19:29:31.598409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.643 [2024-11-26 19:29:31.610527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.643 [2024-11-26 19:29:31.610924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.643 [2024-11-26 19:29:31.610968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.643 [2024-11-26 19:29:31.610993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.643 [2024-11-26 19:29:31.611577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.643 [2024-11-26 19:29:31.612175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.643 [2024-11-26 19:29:31.612201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.643 [2024-11-26 19:29:31.612222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.643 [2024-11-26 19:29:31.612241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.643 [2024-11-26 19:29:31.623419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.643 [2024-11-26 19:29:31.623763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.643 [2024-11-26 19:29:31.623781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.643 [2024-11-26 19:29:31.623788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.643 [2024-11-26 19:29:31.623961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.643 [2024-11-26 19:29:31.624135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.643 [2024-11-26 19:29:31.624143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.643 [2024-11-26 19:29:31.624149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.643 [2024-11-26 19:29:31.624155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.643 [2024-11-26 19:29:31.636256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.643 [2024-11-26 19:29:31.636704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.643 [2024-11-26 19:29:31.636721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.643 [2024-11-26 19:29:31.636728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.643 [2024-11-26 19:29:31.636895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.643 [2024-11-26 19:29:31.637069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.643 [2024-11-26 19:29:31.637077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.643 [2024-11-26 19:29:31.637084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.643 [2024-11-26 19:29:31.637089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.643 [2024-11-26 19:29:31.649030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.643 [2024-11-26 19:29:31.649444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.643 [2024-11-26 19:29:31.649460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.643 [2024-11-26 19:29:31.649467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.643 [2024-11-26 19:29:31.649635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.643 [2024-11-26 19:29:31.649810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.643 [2024-11-26 19:29:31.649818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.643 [2024-11-26 19:29:31.649824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.643 [2024-11-26 19:29:31.649830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.643 [2024-11-26 19:29:31.661878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.643 [2024-11-26 19:29:31.662302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.643 [2024-11-26 19:29:31.662319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.643 [2024-11-26 19:29:31.662326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.643 [2024-11-26 19:29:31.662500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.643 [2024-11-26 19:29:31.662680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.643 [2024-11-26 19:29:31.662689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.643 [2024-11-26 19:29:31.662696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.643 [2024-11-26 19:29:31.662702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.643 [2024-11-26 19:29:31.674952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.643 [2024-11-26 19:29:31.675357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.643 [2024-11-26 19:29:31.675374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.643 [2024-11-26 19:29:31.675381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.643 [2024-11-26 19:29:31.675554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.643 [2024-11-26 19:29:31.675734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.643 [2024-11-26 19:29:31.675743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.643 [2024-11-26 19:29:31.675753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.643 [2024-11-26 19:29:31.675759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.643 [2024-11-26 19:29:31.687885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.643 [2024-11-26 19:29:31.688221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.643 [2024-11-26 19:29:31.688237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.643 [2024-11-26 19:29:31.688244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.643 [2024-11-26 19:29:31.688411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.643 [2024-11-26 19:29:31.688582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.643 [2024-11-26 19:29:31.688590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.643 [2024-11-26 19:29:31.688596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.643 [2024-11-26 19:29:31.688602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.643 [2024-11-26 19:29:31.700917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.643 [2024-11-26 19:29:31.701272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.643 [2024-11-26 19:29:31.701288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.643 [2024-11-26 19:29:31.701295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.643 [2024-11-26 19:29:31.701467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.643 [2024-11-26 19:29:31.701640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.643 [2024-11-26 19:29:31.701648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.643 [2024-11-26 19:29:31.701655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.643 [2024-11-26 19:29:31.701662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.643 [2024-11-26 19:29:31.713757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.643 [2024-11-26 19:29:31.714192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.643 [2024-11-26 19:29:31.714208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.643 [2024-11-26 19:29:31.714215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.643 [2024-11-26 19:29:31.714384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.643 [2024-11-26 19:29:31.714551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.643 [2024-11-26 19:29:31.714559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.643 [2024-11-26 19:29:31.714565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.643 [2024-11-26 19:29:31.714571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.643 [2024-11-26 19:29:31.726536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.643 [2024-11-26 19:29:31.726908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.643 [2024-11-26 19:29:31.726925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.643 [2024-11-26 19:29:31.726932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.643 [2024-11-26 19:29:31.727100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.644 [2024-11-26 19:29:31.727267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.644 [2024-11-26 19:29:31.727275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.644 [2024-11-26 19:29:31.727281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.644 [2024-11-26 19:29:31.727287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.644 [2024-11-26 19:29:31.739424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.644 [2024-11-26 19:29:31.739825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.644 [2024-11-26 19:29:31.739842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.644 [2024-11-26 19:29:31.739849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.644 [2024-11-26 19:29:31.740031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.644 [2024-11-26 19:29:31.740198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.644 [2024-11-26 19:29:31.740206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.644 [2024-11-26 19:29:31.740212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.644 [2024-11-26 19:29:31.740219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.936 [2024-11-26 19:29:31.752471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.936 [2024-11-26 19:29:31.752892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.936 [2024-11-26 19:29:31.752908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.936 [2024-11-26 19:29:31.752915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.936 [2024-11-26 19:29:31.753088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.936 [2024-11-26 19:29:31.753260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.936 [2024-11-26 19:29:31.753268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.936 [2024-11-26 19:29:31.753274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.937 [2024-11-26 19:29:31.753280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.937 [2024-11-26 19:29:31.765555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.937 [2024-11-26 19:29:31.765854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.937 [2024-11-26 19:29:31.765870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.937 [2024-11-26 19:29:31.765881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.937 [2024-11-26 19:29:31.766055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.937 [2024-11-26 19:29:31.766228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.937 [2024-11-26 19:29:31.766236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.937 [2024-11-26 19:29:31.766242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.937 [2024-11-26 19:29:31.766249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.937 [2024-11-26 19:29:31.778532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.937 [2024-11-26 19:29:31.778843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.937 [2024-11-26 19:29:31.778860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.937 [2024-11-26 19:29:31.778867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.937 [2024-11-26 19:29:31.779039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.937 [2024-11-26 19:29:31.779213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.937 [2024-11-26 19:29:31.779221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.937 [2024-11-26 19:29:31.779228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.937 [2024-11-26 19:29:31.779234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.937 [2024-11-26 19:29:31.791458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.937 [2024-11-26 19:29:31.791873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.937 [2024-11-26 19:29:31.791890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.937 [2024-11-26 19:29:31.791897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.937 [2024-11-26 19:29:31.792071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.937 [2024-11-26 19:29:31.792244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.937 [2024-11-26 19:29:31.792253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.937 [2024-11-26 19:29:31.792259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.937 [2024-11-26 19:29:31.792267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.937 [2024-11-26 19:29:31.804546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.937 [2024-11-26 19:29:31.804891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.937 [2024-11-26 19:29:31.804908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.937 [2024-11-26 19:29:31.804915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.937 [2024-11-26 19:29:31.805088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.937 [2024-11-26 19:29:31.805264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.937 [2024-11-26 19:29:31.805272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.937 [2024-11-26 19:29:31.805278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.937 [2024-11-26 19:29:31.805285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.937 [2024-11-26 19:29:31.817525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.937 [2024-11-26 19:29:31.817910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.937 [2024-11-26 19:29:31.817927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.937 [2024-11-26 19:29:31.817934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.937 [2024-11-26 19:29:31.818107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.937 [2024-11-26 19:29:31.818281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.937 [2024-11-26 19:29:31.818289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.937 [2024-11-26 19:29:31.818295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.937 [2024-11-26 19:29:31.818301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.937 [2024-11-26 19:29:31.830810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.937 [2024-11-26 19:29:31.831204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.937 [2024-11-26 19:29:31.831220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.937 [2024-11-26 19:29:31.831228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.937 [2024-11-26 19:29:31.831412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.937 [2024-11-26 19:29:31.831597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.937 [2024-11-26 19:29:31.831605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.937 [2024-11-26 19:29:31.831613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.937 [2024-11-26 19:29:31.831619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.937 [2024-11-26 19:29:31.843896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.937 [2024-11-26 19:29:31.844355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.937 [2024-11-26 19:29:31.844372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.937 [2024-11-26 19:29:31.844380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.937 [2024-11-26 19:29:31.844563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.937 [2024-11-26 19:29:31.844753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.937 [2024-11-26 19:29:31.844762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.938 [2024-11-26 19:29:31.844772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.938 [2024-11-26 19:29:31.844779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.938 [2024-11-26 19:29:31.857076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.938 [2024-11-26 19:29:31.857515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.938 [2024-11-26 19:29:31.857532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.938 [2024-11-26 19:29:31.857540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.938 [2024-11-26 19:29:31.857731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.938 [2024-11-26 19:29:31.857916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.938 [2024-11-26 19:29:31.857924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.938 [2024-11-26 19:29:31.857931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.938 [2024-11-26 19:29:31.857937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.938 [2024-11-26 19:29:31.870274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.938 [2024-11-26 19:29:31.870656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.938 [2024-11-26 19:29:31.870681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.938 [2024-11-26 19:29:31.870689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.938 [2024-11-26 19:29:31.870872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.938 [2024-11-26 19:29:31.871056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.938 [2024-11-26 19:29:31.871065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.938 [2024-11-26 19:29:31.871072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.938 [2024-11-26 19:29:31.871079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.938 [2024-11-26 19:29:31.883541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.938 [2024-11-26 19:29:31.883942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.938 [2024-11-26 19:29:31.883960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.938 [2024-11-26 19:29:31.883967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.938 [2024-11-26 19:29:31.884152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.938 [2024-11-26 19:29:31.884336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.938 [2024-11-26 19:29:31.884344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.938 [2024-11-26 19:29:31.884352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.938 [2024-11-26 19:29:31.884358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.938 [2024-11-26 19:29:31.896725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.938 [2024-11-26 19:29:31.897093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.938 [2024-11-26 19:29:31.897110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.938 [2024-11-26 19:29:31.897117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.938 [2024-11-26 19:29:31.897301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.938 [2024-11-26 19:29:31.897485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.938 [2024-11-26 19:29:31.897492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.938 [2024-11-26 19:29:31.897499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.938 [2024-11-26 19:29:31.897506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.938 [2024-11-26 19:29:31.909727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.938 [2024-11-26 19:29:31.910157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.938 [2024-11-26 19:29:31.910174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.938 [2024-11-26 19:29:31.910181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.938 [2024-11-26 19:29:31.910354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.938 [2024-11-26 19:29:31.910528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.938 [2024-11-26 19:29:31.910536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.938 [2024-11-26 19:29:31.910542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.938 [2024-11-26 19:29:31.910548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.938 [2024-11-26 19:29:31.922793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.938 [2024-11-26 19:29:31.923249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.938 [2024-11-26 19:29:31.923266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.938 [2024-11-26 19:29:31.923274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.938 [2024-11-26 19:29:31.923458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.938 [2024-11-26 19:29:31.923641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.938 [2024-11-26 19:29:31.923649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.938 [2024-11-26 19:29:31.923656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.938 [2024-11-26 19:29:31.923663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.938 [2024-11-26 19:29:31.935855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.938 [2024-11-26 19:29:31.936266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.938 [2024-11-26 19:29:31.936282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.938 [2024-11-26 19:29:31.936292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.938 [2024-11-26 19:29:31.936466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.938 [2024-11-26 19:29:31.936638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.939 [2024-11-26 19:29:31.936646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.939 [2024-11-26 19:29:31.936652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.939 [2024-11-26 19:29:31.936658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.939 [2024-11-26 19:29:31.948892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.939 [2024-11-26 19:29:31.949340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.939 [2024-11-26 19:29:31.949356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.939 [2024-11-26 19:29:31.949364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.939 [2024-11-26 19:29:31.949536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.939 [2024-11-26 19:29:31.949717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.939 [2024-11-26 19:29:31.949725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.939 [2024-11-26 19:29:31.949731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.939 [2024-11-26 19:29:31.949738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.939 [2024-11-26 19:29:31.961708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.939 [2024-11-26 19:29:31.962133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.939 [2024-11-26 19:29:31.962177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.939 [2024-11-26 19:29:31.962200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.939 [2024-11-26 19:29:31.962798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.939 [2024-11-26 19:29:31.963186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.939 [2024-11-26 19:29:31.963194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.939 [2024-11-26 19:29:31.963200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.939 [2024-11-26 19:29:31.963206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.939 [2024-11-26 19:29:31.974465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.939 [2024-11-26 19:29:31.974834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.939 [2024-11-26 19:29:31.974880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.939 [2024-11-26 19:29:31.974902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.939 [2024-11-26 19:29:31.975486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.939 [2024-11-26 19:29:31.976101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.939 [2024-11-26 19:29:31.976109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.939 [2024-11-26 19:29:31.976115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.939 [2024-11-26 19:29:31.976121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.939 [2024-11-26 19:29:31.987301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.939 [2024-11-26 19:29:31.987722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.939 [2024-11-26 19:29:31.987767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.939 [2024-11-26 19:29:31.987790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.939 [2024-11-26 19:29:31.988231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.939 [2024-11-26 19:29:31.988401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.939 [2024-11-26 19:29:31.988408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.939 [2024-11-26 19:29:31.988414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.939 [2024-11-26 19:29:31.988421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.939 [2024-11-26 19:29:32.000101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.939 [2024-11-26 19:29:32.000495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.939 [2024-11-26 19:29:32.000510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.939 [2024-11-26 19:29:32.000516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.939 [2024-11-26 19:29:32.000682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.939 [2024-11-26 19:29:32.000865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.939 [2024-11-26 19:29:32.000873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.939 [2024-11-26 19:29:32.000879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.939 [2024-11-26 19:29:32.000885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.939 [2024-11-26 19:29:32.012885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.939 [2024-11-26 19:29:32.013327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.939 [2024-11-26 19:29:32.013361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.939 [2024-11-26 19:29:32.013386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.939 [2024-11-26 19:29:32.013925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.939 [2024-11-26 19:29:32.014094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.939 [2024-11-26 19:29:32.014101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.939 [2024-11-26 19:29:32.014111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.939 [2024-11-26 19:29:32.014117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.939 [2024-11-26 19:29:32.025663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.939 [2024-11-26 19:29:32.026088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.939 [2024-11-26 19:29:32.026104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.939 [2024-11-26 19:29:32.026110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.939 [2024-11-26 19:29:32.026270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.939 [2024-11-26 19:29:32.026429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.940 [2024-11-26 19:29:32.026436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.940 [2024-11-26 19:29:32.026442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.940 [2024-11-26 19:29:32.026447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.940 [2024-11-26 19:29:32.038560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.940 [2024-11-26 19:29:32.038984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.940 [2024-11-26 19:29:32.039001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:08.940 [2024-11-26 19:29:32.039008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:08.940 [2024-11-26 19:29:32.039181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:08.940 [2024-11-26 19:29:32.039353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.940 [2024-11-26 19:29:32.039361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.940 [2024-11-26 19:29:32.039367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.940 [2024-11-26 19:29:32.039374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.235 [2024-11-26 19:29:32.051452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.235 [2024-11-26 19:29:32.051896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.235 [2024-11-26 19:29:32.051913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.235 [2024-11-26 19:29:32.051920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.235 [2024-11-26 19:29:32.052093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.235 [2024-11-26 19:29:32.052267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.235 [2024-11-26 19:29:32.052275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.235 [2024-11-26 19:29:32.052281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.235 [2024-11-26 19:29:32.052287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.235 [2024-11-26 19:29:32.064515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.235 [2024-11-26 19:29:32.064926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.235 [2024-11-26 19:29:32.064943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.235 [2024-11-26 19:29:32.064950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.235 [2024-11-26 19:29:32.065122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.235 [2024-11-26 19:29:32.065295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.235 [2024-11-26 19:29:32.065303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.235 [2024-11-26 19:29:32.065309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.235 [2024-11-26 19:29:32.065315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.235 [2024-11-26 19:29:32.077563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.235 [2024-11-26 19:29:32.078013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.235 [2024-11-26 19:29:32.078058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.235 [2024-11-26 19:29:32.078080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.235 [2024-11-26 19:29:32.078533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.235 [2024-11-26 19:29:32.078704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.235 [2024-11-26 19:29:32.078713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.235 [2024-11-26 19:29:32.078719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.235 [2024-11-26 19:29:32.078725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.235 [2024-11-26 19:29:32.090402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.235 [2024-11-26 19:29:32.090822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.235 [2024-11-26 19:29:32.090838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.235 [2024-11-26 19:29:32.090845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.235 [2024-11-26 19:29:32.091004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.235 [2024-11-26 19:29:32.091162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.235 [2024-11-26 19:29:32.091170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.235 [2024-11-26 19:29:32.091175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.235 [2024-11-26 19:29:32.091181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.235 [2024-11-26 19:29:32.103237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.235 [2024-11-26 19:29:32.103561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.235 [2024-11-26 19:29:32.103605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.235 [2024-11-26 19:29:32.103635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.235 [2024-11-26 19:29:32.104123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.235 [2024-11-26 19:29:32.104293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.235 [2024-11-26 19:29:32.104301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.235 [2024-11-26 19:29:32.104307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.235 [2024-11-26 19:29:32.104313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.235 [2024-11-26 19:29:32.116103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.235 [2024-11-26 19:29:32.116519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.235 [2024-11-26 19:29:32.116562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.235 [2024-11-26 19:29:32.116585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.235 [2024-11-26 19:29:32.117166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.235 [2024-11-26 19:29:32.117334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.235 [2024-11-26 19:29:32.117342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.235 [2024-11-26 19:29:32.117349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.235 [2024-11-26 19:29:32.117355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.235 [2024-11-26 19:29:32.128958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.235 [2024-11-26 19:29:32.129300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.235 [2024-11-26 19:29:32.129316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.235 [2024-11-26 19:29:32.129322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.235 [2024-11-26 19:29:32.129481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.235 [2024-11-26 19:29:32.129640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.235 [2024-11-26 19:29:32.129647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.235 [2024-11-26 19:29:32.129653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.235 [2024-11-26 19:29:32.129659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.235 [2024-11-26 19:29:32.141798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.236 [2024-11-26 19:29:32.142256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.236 [2024-11-26 19:29:32.142299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.236 [2024-11-26 19:29:32.142323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.236 [2024-11-26 19:29:32.142921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.236 [2024-11-26 19:29:32.143514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.236 [2024-11-26 19:29:32.143539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.236 [2024-11-26 19:29:32.143565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.236 [2024-11-26 19:29:32.143572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.236 [2024-11-26 19:29:32.154660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.236 [2024-11-26 19:29:32.155079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.236 [2024-11-26 19:29:32.155095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.236 [2024-11-26 19:29:32.155103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.236 [2024-11-26 19:29:32.155271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.236 [2024-11-26 19:29:32.155439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.236 [2024-11-26 19:29:32.155447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.236 [2024-11-26 19:29:32.155453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.236 [2024-11-26 19:29:32.155459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.236 [2024-11-26 19:29:32.167422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.236 [2024-11-26 19:29:32.167836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.236 [2024-11-26 19:29:32.167852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.236 [2024-11-26 19:29:32.167859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.236 [2024-11-26 19:29:32.168017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.236 [2024-11-26 19:29:32.168176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.236 [2024-11-26 19:29:32.168184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.236 [2024-11-26 19:29:32.168189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.236 [2024-11-26 19:29:32.168195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.236 [2024-11-26 19:29:32.180279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.236 [2024-11-26 19:29:32.180731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.236 [2024-11-26 19:29:32.180748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.236 [2024-11-26 19:29:32.180755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.236 [2024-11-26 19:29:32.180927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.236 [2024-11-26 19:29:32.181099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.236 [2024-11-26 19:29:32.181107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.236 [2024-11-26 19:29:32.181117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.236 [2024-11-26 19:29:32.181124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.236 [2024-11-26 19:29:32.193341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.236 [2024-11-26 19:29:32.193691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.236 [2024-11-26 19:29:32.193707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.236 [2024-11-26 19:29:32.193715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.236 [2024-11-26 19:29:32.193888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.236 [2024-11-26 19:29:32.194061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.236 [2024-11-26 19:29:32.194069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.236 [2024-11-26 19:29:32.194075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.236 [2024-11-26 19:29:32.194081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.236 [2024-11-26 19:29:32.206279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.236 [2024-11-26 19:29:32.206697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.236 [2024-11-26 19:29:32.206713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.236 [2024-11-26 19:29:32.206721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.236 [2024-11-26 19:29:32.206888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.236 [2024-11-26 19:29:32.207056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.236 [2024-11-26 19:29:32.207064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.236 [2024-11-26 19:29:32.207070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.236 [2024-11-26 19:29:32.207076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.236 [2024-11-26 19:29:32.219083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.236 [2024-11-26 19:29:32.219503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.236 [2024-11-26 19:29:32.219518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.236 [2024-11-26 19:29:32.219525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.236 [2024-11-26 19:29:32.219690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.236 [2024-11-26 19:29:32.219874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.236 [2024-11-26 19:29:32.219882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.236 [2024-11-26 19:29:32.219887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.236 [2024-11-26 19:29:32.219894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.236 [2024-11-26 19:29:32.231939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.236 [2024-11-26 19:29:32.232352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.236 [2024-11-26 19:29:32.232368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.236 [2024-11-26 19:29:32.232374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.236 [2024-11-26 19:29:32.232550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.236 [2024-11-26 19:29:32.232740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.237 [2024-11-26 19:29:32.232748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.237 [2024-11-26 19:29:32.232755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.237 [2024-11-26 19:29:32.232760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.237 [2024-11-26 19:29:32.244725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.237 [2024-11-26 19:29:32.245142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.237 [2024-11-26 19:29:32.245157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.237 [2024-11-26 19:29:32.245164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.237 [2024-11-26 19:29:32.245321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.237 [2024-11-26 19:29:32.245481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.237 [2024-11-26 19:29:32.245488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.237 [2024-11-26 19:29:32.245494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.237 [2024-11-26 19:29:32.245500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.237 [2024-11-26 19:29:32.257593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.237 [2024-11-26 19:29:32.258023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.237 [2024-11-26 19:29:32.258039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.237 [2024-11-26 19:29:32.258046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.237 [2024-11-26 19:29:32.258215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.237 [2024-11-26 19:29:32.258387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.237 [2024-11-26 19:29:32.258394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.237 [2024-11-26 19:29:32.258401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.237 [2024-11-26 19:29:32.258407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.237 [2024-11-26 19:29:32.270313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.237 [2024-11-26 19:29:32.270713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.237 [2024-11-26 19:29:32.270728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.237 [2024-11-26 19:29:32.270738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.237 [2024-11-26 19:29:32.270898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.237 [2024-11-26 19:29:32.271057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.237 [2024-11-26 19:29:32.271064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.237 [2024-11-26 19:29:32.271070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.237 [2024-11-26 19:29:32.271076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.237 [2024-11-26 19:29:32.283077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.237 [2024-11-26 19:29:32.283514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.237 [2024-11-26 19:29:32.283529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.237 [2024-11-26 19:29:32.283536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.237 [2024-11-26 19:29:32.283710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.237 [2024-11-26 19:29:32.283909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.237 [2024-11-26 19:29:32.283917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.237 [2024-11-26 19:29:32.283923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.237 [2024-11-26 19:29:32.283929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.237 [2024-11-26 19:29:32.295929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.237 [2024-11-26 19:29:32.296231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.237 [2024-11-26 19:29:32.296247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.237 [2024-11-26 19:29:32.296253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.237 [2024-11-26 19:29:32.296412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.237 [2024-11-26 19:29:32.296571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.237 [2024-11-26 19:29:32.296578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.237 [2024-11-26 19:29:32.296584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.237 [2024-11-26 19:29:32.296590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.237 [2024-11-26 19:29:32.308721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.237 [2024-11-26 19:29:32.309140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.237 [2024-11-26 19:29:32.309156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.237 [2024-11-26 19:29:32.309163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.237 [2024-11-26 19:29:32.309323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.237 [2024-11-26 19:29:32.309485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.237 [2024-11-26 19:29:32.309493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.237 [2024-11-26 19:29:32.309499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.237 [2024-11-26 19:29:32.309505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.237 [2024-11-26 19:29:32.321567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.237 [2024-11-26 19:29:32.322014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.237 [2024-11-26 19:29:32.322031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.238 [2024-11-26 19:29:32.322038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.238 [2024-11-26 19:29:32.322211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.238 [2024-11-26 19:29:32.322384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.238 [2024-11-26 19:29:32.322392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.238 [2024-11-26 19:29:32.322398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.238 [2024-11-26 19:29:32.322405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.540 [2024-11-26 19:29:32.334626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.540 [2024-11-26 19:29:32.335061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.540 [2024-11-26 19:29:32.335078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.540 [2024-11-26 19:29:32.335086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.540 [2024-11-26 19:29:32.335259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.540 [2024-11-26 19:29:32.335432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.540 [2024-11-26 19:29:32.335440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.540 [2024-11-26 19:29:32.335446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.540 [2024-11-26 19:29:32.335452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.540 [2024-11-26 19:29:32.347526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.540 [2024-11-26 19:29:32.347956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.540 [2024-11-26 19:29:32.347972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.540 [2024-11-26 19:29:32.347979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.540 [2024-11-26 19:29:32.348147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.540 [2024-11-26 19:29:32.348314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.540 [2024-11-26 19:29:32.348322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.540 [2024-11-26 19:29:32.348332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.540 [2024-11-26 19:29:32.348338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.540 [2024-11-26 19:29:32.360520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.540 [2024-11-26 19:29:32.360959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.540 [2024-11-26 19:29:32.360976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.540 [2024-11-26 19:29:32.360983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.540 [2024-11-26 19:29:32.361155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.540 [2024-11-26 19:29:32.361328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.540 [2024-11-26 19:29:32.361336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.540 [2024-11-26 19:29:32.361342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.540 [2024-11-26 19:29:32.361349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.540 7126.00 IOPS, 27.84 MiB/s [2024-11-26T18:29:32.654Z] [2024-11-26 19:29:32.374713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.540 [2024-11-26 19:29:32.375151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.540 [2024-11-26 19:29:32.375167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.540 [2024-11-26 19:29:32.375174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.540 [2024-11-26 19:29:32.375342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.540 [2024-11-26 19:29:32.375509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.540 [2024-11-26 19:29:32.375517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.540 [2024-11-26 19:29:32.375523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.540 [2024-11-26 19:29:32.375529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.540 [2024-11-26 19:29:32.387612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.540 [2024-11-26 19:29:32.388045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.540 [2024-11-26 19:29:32.388087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.540 [2024-11-26 19:29:32.388112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.540 [2024-11-26 19:29:32.388708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.540 [2024-11-26 19:29:32.389289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.540 [2024-11-26 19:29:32.389296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.540 [2024-11-26 19:29:32.389302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.540 [2024-11-26 19:29:32.389309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
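The interleaved "7126.00 IOPS, 27.84 MiB/s" figure is the I/O generator's periodic performance summary, printed between the error entries. The two numbers are consistent with a 4 KiB I/O size; that block size is an assumption here rather than something stated in the log. A quick check:

/* Quick check (assuming a 4 KiB I/O size, which the log does not state):
 * throughput in MiB/s = IOPS * block_size / (1024 * 1024). */
#include <stdio.h>

int main(void)
{
    double iops = 7126.0;
    double block_size = 4096.0;                        /* assumed 4 KiB I/Os */
    double mib_per_s = iops * block_size / (1024.0 * 1024.0);
    printf("%.2f MiB/s\n", mib_per_s);                 /* prints 27.84 MiB/s */
    return 0;
}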
00:28:09.540 [2024-11-26 19:29:32.400534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.540 [2024-11-26 19:29:32.400979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.540 [2024-11-26 19:29:32.400995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.540 [2024-11-26 19:29:32.401003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.540 [2024-11-26 19:29:32.401171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.540 [2024-11-26 19:29:32.401339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.540 [2024-11-26 19:29:32.401347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.541 [2024-11-26 19:29:32.401353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.541 [2024-11-26 19:29:32.401359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.541 [2024-11-26 19:29:32.413365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.541 [2024-11-26 19:29:32.413746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.541 [2024-11-26 19:29:32.413790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.541 [2024-11-26 19:29:32.413814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.541 [2024-11-26 19:29:32.414258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.541 [2024-11-26 19:29:32.414418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.541 [2024-11-26 19:29:32.414425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.541 [2024-11-26 19:29:32.414431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.541 [2024-11-26 19:29:32.414437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.541 [2024-11-26 19:29:32.426240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.541 [2024-11-26 19:29:32.426700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.541 [2024-11-26 19:29:32.426747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.541 [2024-11-26 19:29:32.426769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.541 [2024-11-26 19:29:32.427248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.541 [2024-11-26 19:29:32.427417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.541 [2024-11-26 19:29:32.427426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.541 [2024-11-26 19:29:32.427434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.541 [2024-11-26 19:29:32.427441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.541 [2024-11-26 19:29:32.439093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.541 [2024-11-26 19:29:32.439526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.541 [2024-11-26 19:29:32.439542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.541 [2024-11-26 19:29:32.439553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.541 [2024-11-26 19:29:32.439732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.541 [2024-11-26 19:29:32.439910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.541 [2024-11-26 19:29:32.439918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.541 [2024-11-26 19:29:32.439926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.541 [2024-11-26 19:29:32.439932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.541 [2024-11-26 19:29:32.452170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.541 [2024-11-26 19:29:32.452603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.541 [2024-11-26 19:29:32.452618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.541 [2024-11-26 19:29:32.452626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.541 [2024-11-26 19:29:32.452804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.541 [2024-11-26 19:29:32.452986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.541 [2024-11-26 19:29:32.452994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.541 [2024-11-26 19:29:32.453000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.541 [2024-11-26 19:29:32.453006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.541 [2024-11-26 19:29:32.465071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.541 [2024-11-26 19:29:32.465506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.541 [2024-11-26 19:29:32.465550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.541 [2024-11-26 19:29:32.465574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.541 [2024-11-26 19:29:32.466153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.541 [2024-11-26 19:29:32.466330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.541 [2024-11-26 19:29:32.466355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.541 [2024-11-26 19:29:32.466370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.541 [2024-11-26 19:29:32.466384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.541 [2024-11-26 19:29:32.479958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.541 [2024-11-26 19:29:32.480452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.541 [2024-11-26 19:29:32.480473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.541 [2024-11-26 19:29:32.480483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.541 [2024-11-26 19:29:32.480745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.541 [2024-11-26 19:29:32.481004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.541 [2024-11-26 19:29:32.481016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.541 [2024-11-26 19:29:32.481025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.541 [2024-11-26 19:29:32.481034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.541 [2024-11-26 19:29:32.493003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.541 [2024-11-26 19:29:32.493444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.541 [2024-11-26 19:29:32.493486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.541 [2024-11-26 19:29:32.493510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.541 [2024-11-26 19:29:32.494050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.541 [2024-11-26 19:29:32.494224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.541 [2024-11-26 19:29:32.494232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.541 [2024-11-26 19:29:32.494238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.541 [2024-11-26 19:29:32.494244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.541 [2024-11-26 19:29:32.505811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.541 [2024-11-26 19:29:32.506253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.541 [2024-11-26 19:29:32.506269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.541 [2024-11-26 19:29:32.506276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.541 [2024-11-26 19:29:32.506444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.542 [2024-11-26 19:29:32.506616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.542 [2024-11-26 19:29:32.506623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.542 [2024-11-26 19:29:32.506630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.542 [2024-11-26 19:29:32.506636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.542 [2024-11-26 19:29:32.518533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.542 [2024-11-26 19:29:32.518874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.542 [2024-11-26 19:29:32.518930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.542 [2024-11-26 19:29:32.518954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.542 [2024-11-26 19:29:32.519452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.542 [2024-11-26 19:29:32.519620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.542 [2024-11-26 19:29:32.519628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.542 [2024-11-26 19:29:32.519638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.542 [2024-11-26 19:29:32.519644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.542 [2024-11-26 19:29:32.531345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.542 [2024-11-26 19:29:32.531710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.542 [2024-11-26 19:29:32.531726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.542 [2024-11-26 19:29:32.531733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.542 [2024-11-26 19:29:32.531901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.542 [2024-11-26 19:29:32.532072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.542 [2024-11-26 19:29:32.532080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.542 [2024-11-26 19:29:32.532086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.542 [2024-11-26 19:29:32.532092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.542 [2024-11-26 19:29:32.544138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.542 [2024-11-26 19:29:32.544572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.542 [2024-11-26 19:29:32.544617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.542 [2024-11-26 19:29:32.544641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.542 [2024-11-26 19:29:32.545210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.542 [2024-11-26 19:29:32.545379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.542 [2024-11-26 19:29:32.545387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.542 [2024-11-26 19:29:32.545394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.542 [2024-11-26 19:29:32.545400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.542 [2024-11-26 19:29:32.556993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.542 [2024-11-26 19:29:32.557431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.542 [2024-11-26 19:29:32.557447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.542 [2024-11-26 19:29:32.557454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.542 [2024-11-26 19:29:32.557623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.542 [2024-11-26 19:29:32.557798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.542 [2024-11-26 19:29:32.557807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.542 [2024-11-26 19:29:32.557813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.542 [2024-11-26 19:29:32.557819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.542 [2024-11-26 19:29:32.569825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.542 [2024-11-26 19:29:32.570171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.542 [2024-11-26 19:29:32.570188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.542 [2024-11-26 19:29:32.570195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.542 [2024-11-26 19:29:32.570363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.542 [2024-11-26 19:29:32.570530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.542 [2024-11-26 19:29:32.570538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.542 [2024-11-26 19:29:32.570545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.542 [2024-11-26 19:29:32.570551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.542 [2024-11-26 19:29:32.582718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.542 [2024-11-26 19:29:32.583071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.542 [2024-11-26 19:29:32.583114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.542 [2024-11-26 19:29:32.583137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.542 [2024-11-26 19:29:32.583605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.542 [2024-11-26 19:29:32.583797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.542 [2024-11-26 19:29:32.583806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.542 [2024-11-26 19:29:32.583812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.542 [2024-11-26 19:29:32.583818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.542 [2024-11-26 19:29:32.595588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.542 [2024-11-26 19:29:32.596034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.542 [2024-11-26 19:29:32.596050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.542 [2024-11-26 19:29:32.596057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.542 [2024-11-26 19:29:32.596225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.542 [2024-11-26 19:29:32.596395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.542 [2024-11-26 19:29:32.596403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.542 [2024-11-26 19:29:32.596409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.542 [2024-11-26 19:29:32.596415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.542 [2024-11-26 19:29:32.608674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.542 [2024-11-26 19:29:32.609101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.542 [2024-11-26 19:29:32.609118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.542 [2024-11-26 19:29:32.609128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.543 [2024-11-26 19:29:32.609302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.543 [2024-11-26 19:29:32.609474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.543 [2024-11-26 19:29:32.609481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.543 [2024-11-26 19:29:32.609488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.543 [2024-11-26 19:29:32.609494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.543 [2024-11-26 19:29:32.621710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.543 [2024-11-26 19:29:32.622137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.543 [2024-11-26 19:29:32.622152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.543 [2024-11-26 19:29:32.622160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.543 [2024-11-26 19:29:32.622334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.543 [2024-11-26 19:29:32.622507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.543 [2024-11-26 19:29:32.622515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.543 [2024-11-26 19:29:32.622521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.543 [2024-11-26 19:29:32.622527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.543 [2024-11-26 19:29:32.634673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.543 [2024-11-26 19:29:32.635046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.543 [2024-11-26 19:29:32.635062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.543 [2024-11-26 19:29:32.635069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.543 [2024-11-26 19:29:32.635237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.543 [2024-11-26 19:29:32.635405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.543 [2024-11-26 19:29:32.635412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.543 [2024-11-26 19:29:32.635419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.543 [2024-11-26 19:29:32.635425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.829 [2024-11-26 19:29:32.647645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.829 [2024-11-26 19:29:32.648080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.829 [2024-11-26 19:29:32.648096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.829 [2024-11-26 19:29:32.648103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.829 [2024-11-26 19:29:32.648276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.829 [2024-11-26 19:29:32.648453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.829 [2024-11-26 19:29:32.648460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.829 [2024-11-26 19:29:32.648467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.829 [2024-11-26 19:29:32.648473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.829 [2024-11-26 19:29:32.660726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.829 [2024-11-26 19:29:32.661158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.829 [2024-11-26 19:29:32.661175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.829 [2024-11-26 19:29:32.661182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.829 [2024-11-26 19:29:32.661355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.829 [2024-11-26 19:29:32.661528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.829 [2024-11-26 19:29:32.661535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.829 [2024-11-26 19:29:32.661541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.829 [2024-11-26 19:29:32.661547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.829 [2024-11-26 19:29:32.673757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.829 [2024-11-26 19:29:32.674208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.829 [2024-11-26 19:29:32.674250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.829 [2024-11-26 19:29:32.674274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.829 [2024-11-26 19:29:32.674802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.829 [2024-11-26 19:29:32.674976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.829 [2024-11-26 19:29:32.674984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.829 [2024-11-26 19:29:32.674991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.829 [2024-11-26 19:29:32.674997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.829 [2024-11-26 19:29:32.686690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.829 [2024-11-26 19:29:32.687128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.829 [2024-11-26 19:29:32.687144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.829 [2024-11-26 19:29:32.687150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.829 [2024-11-26 19:29:32.687318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.829 [2024-11-26 19:29:32.687490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.829 [2024-11-26 19:29:32.687497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.829 [2024-11-26 19:29:32.687508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.829 [2024-11-26 19:29:32.687515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.829 [2024-11-26 19:29:32.699490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.829 [2024-11-26 19:29:32.699857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.829 [2024-11-26 19:29:32.699874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.829 [2024-11-26 19:29:32.699881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.829 [2024-11-26 19:29:32.700054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.829 [2024-11-26 19:29:32.700226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.829 [2024-11-26 19:29:32.700234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.829 [2024-11-26 19:29:32.700240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.829 [2024-11-26 19:29:32.700246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.829 [2024-11-26 19:29:32.712529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.829 [2024-11-26 19:29:32.712965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.829 [2024-11-26 19:29:32.712981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.829 [2024-11-26 19:29:32.712989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.829 [2024-11-26 19:29:32.713162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.829 [2024-11-26 19:29:32.713335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.829 [2024-11-26 19:29:32.713343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.829 [2024-11-26 19:29:32.713349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.829 [2024-11-26 19:29:32.713355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.829 [2024-11-26 19:29:32.725465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.829 [2024-11-26 19:29:32.725864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.829 [2024-11-26 19:29:32.725880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.829 [2024-11-26 19:29:32.725887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.829 [2024-11-26 19:29:32.726055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.829 [2024-11-26 19:29:32.726223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.829 [2024-11-26 19:29:32.726231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.829 [2024-11-26 19:29:32.726237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.830 [2024-11-26 19:29:32.726243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.830 [2024-11-26 19:29:32.738230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.830 [2024-11-26 19:29:32.738615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.830 [2024-11-26 19:29:32.738631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.830 [2024-11-26 19:29:32.738638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.830 [2024-11-26 19:29:32.738824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.830 [2024-11-26 19:29:32.738992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.830 [2024-11-26 19:29:32.739000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.830 [2024-11-26 19:29:32.739006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.830 [2024-11-26 19:29:32.739012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.830 [2024-11-26 19:29:32.751045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.830 [2024-11-26 19:29:32.751489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.830 [2024-11-26 19:29:32.751532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.830 [2024-11-26 19:29:32.751554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.830 [2024-11-26 19:29:32.752002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.830 [2024-11-26 19:29:32.752170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.830 [2024-11-26 19:29:32.752177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.830 [2024-11-26 19:29:32.752184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.830 [2024-11-26 19:29:32.752190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.830 [2024-11-26 19:29:32.763852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.830 [2024-11-26 19:29:32.764281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.830 [2024-11-26 19:29:32.764296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.830 [2024-11-26 19:29:32.764303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.830 [2024-11-26 19:29:32.764461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.830 [2024-11-26 19:29:32.764619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.830 [2024-11-26 19:29:32.764626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.830 [2024-11-26 19:29:32.764632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.830 [2024-11-26 19:29:32.764638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.830 [2024-11-26 19:29:32.776636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.830 [2024-11-26 19:29:32.777068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.830 [2024-11-26 19:29:32.777112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.830 [2024-11-26 19:29:32.777142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.830 [2024-11-26 19:29:32.777741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.830 [2024-11-26 19:29:32.778182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.830 [2024-11-26 19:29:32.778190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.830 [2024-11-26 19:29:32.778196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.830 [2024-11-26 19:29:32.778202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.830 [2024-11-26 19:29:32.789497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.830 [2024-11-26 19:29:32.789929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.830 [2024-11-26 19:29:32.789976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.830 [2024-11-26 19:29:32.789999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.830 [2024-11-26 19:29:32.790582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.830 [2024-11-26 19:29:32.791129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.830 [2024-11-26 19:29:32.791137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.830 [2024-11-26 19:29:32.791143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.830 [2024-11-26 19:29:32.791149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.830 [2024-11-26 19:29:32.802260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.830 [2024-11-26 19:29:32.802691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.830 [2024-11-26 19:29:32.802735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.830 [2024-11-26 19:29:32.802759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.830 [2024-11-26 19:29:32.803207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.830 [2024-11-26 19:29:32.803374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.830 [2024-11-26 19:29:32.803382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.830 [2024-11-26 19:29:32.803388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.830 [2024-11-26 19:29:32.803394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.830 [2024-11-26 19:29:32.814997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.830 [2024-11-26 19:29:32.815405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.830 [2024-11-26 19:29:32.815420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.830 [2024-11-26 19:29:32.815427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.830 [2024-11-26 19:29:32.815586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.830 [2024-11-26 19:29:32.815770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.830 [2024-11-26 19:29:32.815778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.830 [2024-11-26 19:29:32.815784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.830 [2024-11-26 19:29:32.815791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.830 [2024-11-26 19:29:32.827830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.830 [2024-11-26 19:29:32.828183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.830 [2024-11-26 19:29:32.828199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.830 [2024-11-26 19:29:32.828206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.830 [2024-11-26 19:29:32.828374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.830 [2024-11-26 19:29:32.828541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.830 [2024-11-26 19:29:32.828549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.830 [2024-11-26 19:29:32.828555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.830 [2024-11-26 19:29:32.828561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.830 [2024-11-26 19:29:32.840663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.830 [2024-11-26 19:29:32.841089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.830 [2024-11-26 19:29:32.841104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.830 [2024-11-26 19:29:32.841111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.830 [2024-11-26 19:29:32.841279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.830 [2024-11-26 19:29:32.841446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.830 [2024-11-26 19:29:32.841454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.830 [2024-11-26 19:29:32.841460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.830 [2024-11-26 19:29:32.841466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.830 [2024-11-26 19:29:32.853547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.830 [2024-11-26 19:29:32.853939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.830 [2024-11-26 19:29:32.853955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.830 [2024-11-26 19:29:32.853963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.830 [2024-11-26 19:29:32.854130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.830 [2024-11-26 19:29:32.854297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.830 [2024-11-26 19:29:32.854305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.830 [2024-11-26 19:29:32.854315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.831 [2024-11-26 19:29:32.854321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.831 [2024-11-26 19:29:32.866306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.831 [2024-11-26 19:29:32.866693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.831 [2024-11-26 19:29:32.866724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.831 [2024-11-26 19:29:32.866731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.831 [2024-11-26 19:29:32.866900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.831 [2024-11-26 19:29:32.867071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.831 [2024-11-26 19:29:32.867079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.831 [2024-11-26 19:29:32.867085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.831 [2024-11-26 19:29:32.867091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.831 [2024-11-26 19:29:32.879120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.831 [2024-11-26 19:29:32.879510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.831 [2024-11-26 19:29:32.879526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.831 [2024-11-26 19:29:32.879532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.831 [2024-11-26 19:29:32.879712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.831 [2024-11-26 19:29:32.879879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.831 [2024-11-26 19:29:32.879887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.831 [2024-11-26 19:29:32.879893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.831 [2024-11-26 19:29:32.879899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.831 [2024-11-26 19:29:32.891917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.831 [2024-11-26 19:29:32.892346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.831 [2024-11-26 19:29:32.892390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.831 [2024-11-26 19:29:32.892413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.831 [2024-11-26 19:29:32.893009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.831 [2024-11-26 19:29:32.893529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.831 [2024-11-26 19:29:32.893537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.831 [2024-11-26 19:29:32.893543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.831 [2024-11-26 19:29:32.893550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.831 [2024-11-26 19:29:32.904741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.831 [2024-11-26 19:29:32.905110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.831 [2024-11-26 19:29:32.905125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.831 [2024-11-26 19:29:32.905132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.831 [2024-11-26 19:29:32.905291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.831 [2024-11-26 19:29:32.905450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.831 [2024-11-26 19:29:32.905456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.831 [2024-11-26 19:29:32.905462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.831 [2024-11-26 19:29:32.905467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.831 [2024-11-26 19:29:32.917812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.831 [2024-11-26 19:29:32.918220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.831 [2024-11-26 19:29:32.918237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.831 [2024-11-26 19:29:32.918244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.831 [2024-11-26 19:29:32.918416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.831 [2024-11-26 19:29:32.918589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.831 [2024-11-26 19:29:32.918598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.831 [2024-11-26 19:29:32.918604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.831 [2024-11-26 19:29:32.918610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.831 [2024-11-26 19:29:32.930811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.831 [2024-11-26 19:29:32.931221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.831 [2024-11-26 19:29:32.931237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:09.831 [2024-11-26 19:29:32.931244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:09.831 [2024-11-26 19:29:32.931417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:09.831 [2024-11-26 19:29:32.931590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.831 [2024-11-26 19:29:32.931598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.831 [2024-11-26 19:29:32.931604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.831 [2024-11-26 19:29:32.931611] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.091 [2024-11-26 19:29:32.943936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.091 [2024-11-26 19:29:32.944349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.091 [2024-11-26 19:29:32.944366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.091 [2024-11-26 19:29:32.944377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.091 [2024-11-26 19:29:32.944551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.091 [2024-11-26 19:29:32.944729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.091 [2024-11-26 19:29:32.944737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.092 [2024-11-26 19:29:32.944743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.092 [2024-11-26 19:29:32.944750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.092 [2024-11-26 19:29:32.957006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.092 [2024-11-26 19:29:32.957358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.092 [2024-11-26 19:29:32.957375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.092 [2024-11-26 19:29:32.957383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.092 [2024-11-26 19:29:32.957566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.092 [2024-11-26 19:29:32.957755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.092 [2024-11-26 19:29:32.957764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.092 [2024-11-26 19:29:32.957770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.092 [2024-11-26 19:29:32.957777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.092 [2024-11-26 19:29:32.970047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.092 [2024-11-26 19:29:32.970432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.092 [2024-11-26 19:29:32.970475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.092 [2024-11-26 19:29:32.970498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.092 [2024-11-26 19:29:32.971097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.092 [2024-11-26 19:29:32.971679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.092 [2024-11-26 19:29:32.971697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.092 [2024-11-26 19:29:32.971712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.092 [2024-11-26 19:29:32.971725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.092 [2024-11-26 19:29:32.985165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.092 [2024-11-26 19:29:32.985607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.092 [2024-11-26 19:29:32.985629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.092 [2024-11-26 19:29:32.985640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.092 [2024-11-26 19:29:32.985902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.092 [2024-11-26 19:29:32.986162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.092 [2024-11-26 19:29:32.986173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.092 [2024-11-26 19:29:32.986182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.092 [2024-11-26 19:29:32.986191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.092 [2024-11-26 19:29:32.998261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.092 [2024-11-26 19:29:32.998699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.092 [2024-11-26 19:29:32.998743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.092 [2024-11-26 19:29:32.998766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.092 [2024-11-26 19:29:32.999348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.092 [2024-11-26 19:29:32.999803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.092 [2024-11-26 19:29:32.999811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.092 [2024-11-26 19:29:32.999818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.092 [2024-11-26 19:29:32.999824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.092 [2024-11-26 19:29:33.011034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.092 [2024-11-26 19:29:33.011474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.092 [2024-11-26 19:29:33.011517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.092 [2024-11-26 19:29:33.011539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.092 [2024-11-26 19:29:33.012136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.092 [2024-11-26 19:29:33.012722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.092 [2024-11-26 19:29:33.012740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.092 [2024-11-26 19:29:33.012754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.092 [2024-11-26 19:29:33.012767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.092 [2024-11-26 19:29:33.026110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.092 [2024-11-26 19:29:33.026580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.092 [2024-11-26 19:29:33.026600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.092 [2024-11-26 19:29:33.026611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.092 [2024-11-26 19:29:33.026875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.092 [2024-11-26 19:29:33.027130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.092 [2024-11-26 19:29:33.027141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.092 [2024-11-26 19:29:33.027154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.092 [2024-11-26 19:29:33.027163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.092 [2024-11-26 19:29:33.039220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.092 [2024-11-26 19:29:33.039620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.092 [2024-11-26 19:29:33.039636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.092 [2024-11-26 19:29:33.039643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.092 [2024-11-26 19:29:33.039822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.092 [2024-11-26 19:29:33.039995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.092 [2024-11-26 19:29:33.040003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.092 [2024-11-26 19:29:33.040009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.093 [2024-11-26 19:29:33.040016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.093 [2024-11-26 19:29:33.052013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.093 [2024-11-26 19:29:33.052384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.093 [2024-11-26 19:29:33.052400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.093 [2024-11-26 19:29:33.052406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.093 [2024-11-26 19:29:33.052575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.093 [2024-11-26 19:29:33.052752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.093 [2024-11-26 19:29:33.052760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.093 [2024-11-26 19:29:33.052766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.093 [2024-11-26 19:29:33.052772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.093 [2024-11-26 19:29:33.064911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.093 [2024-11-26 19:29:33.065237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.093 [2024-11-26 19:29:33.065252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.093 [2024-11-26 19:29:33.065259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.093 [2024-11-26 19:29:33.065427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.093 [2024-11-26 19:29:33.065595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.093 [2024-11-26 19:29:33.065602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.093 [2024-11-26 19:29:33.065608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.093 [2024-11-26 19:29:33.065614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.093 [2024-11-26 19:29:33.077675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.093 [2024-11-26 19:29:33.078106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.093 [2024-11-26 19:29:33.078122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.093 [2024-11-26 19:29:33.078129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.093 [2024-11-26 19:29:33.078297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.093 [2024-11-26 19:29:33.078465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.093 [2024-11-26 19:29:33.078472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.093 [2024-11-26 19:29:33.078478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.093 [2024-11-26 19:29:33.078484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.093 [2024-11-26 19:29:33.090523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.093 [2024-11-26 19:29:33.090834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.093 [2024-11-26 19:29:33.090850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.093 [2024-11-26 19:29:33.090857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.093 [2024-11-26 19:29:33.091025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.093 [2024-11-26 19:29:33.091192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.093 [2024-11-26 19:29:33.091200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.093 [2024-11-26 19:29:33.091206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.093 [2024-11-26 19:29:33.091212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.093 [2024-11-26 19:29:33.103476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.093 [2024-11-26 19:29:33.103823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.093 [2024-11-26 19:29:33.103840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.093 [2024-11-26 19:29:33.103847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.093 [2024-11-26 19:29:33.104015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.093 [2024-11-26 19:29:33.104183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.093 [2024-11-26 19:29:33.104190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.093 [2024-11-26 19:29:33.104197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.093 [2024-11-26 19:29:33.104203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.093 [2024-11-26 19:29:33.116247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.093 [2024-11-26 19:29:33.116696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.093 [2024-11-26 19:29:33.116742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.093 [2024-11-26 19:29:33.116772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.093 [2024-11-26 19:29:33.117355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.093 [2024-11-26 19:29:33.117963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.093 [2024-11-26 19:29:33.117971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.093 [2024-11-26 19:29:33.117978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.093 [2024-11-26 19:29:33.117984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.093 [2024-11-26 19:29:33.129162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.093 [2024-11-26 19:29:33.129453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.093 [2024-11-26 19:29:33.129469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.093 [2024-11-26 19:29:33.129475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.093 [2024-11-26 19:29:33.129643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.093 [2024-11-26 19:29:33.129817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.093 [2024-11-26 19:29:33.129825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.093 [2024-11-26 19:29:33.129831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.093 [2024-11-26 19:29:33.129837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.093 [2024-11-26 19:29:33.142149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.093 [2024-11-26 19:29:33.142514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.093 [2024-11-26 19:29:33.142530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.093 [2024-11-26 19:29:33.142538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.093 [2024-11-26 19:29:33.142717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.093 [2024-11-26 19:29:33.142889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.094 [2024-11-26 19:29:33.142897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.094 [2024-11-26 19:29:33.142904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.094 [2024-11-26 19:29:33.142910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.094 [2024-11-26 19:29:33.154928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.094 [2024-11-26 19:29:33.155230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.094 [2024-11-26 19:29:33.155246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.094 [2024-11-26 19:29:33.155252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.094 [2024-11-26 19:29:33.155420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.094 [2024-11-26 19:29:33.155596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.094 [2024-11-26 19:29:33.155603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.094 [2024-11-26 19:29:33.155609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.094 [2024-11-26 19:29:33.155615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.094 [2024-11-26 19:29:33.167783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.094 [2024-11-26 19:29:33.168199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.094 [2024-11-26 19:29:33.168215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.094 [2024-11-26 19:29:33.168222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.094 [2024-11-26 19:29:33.168390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.094 [2024-11-26 19:29:33.168557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.094 [2024-11-26 19:29:33.168565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.094 [2024-11-26 19:29:33.168571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.094 [2024-11-26 19:29:33.168577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.094 [2024-11-26 19:29:33.180613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.094 [2024-11-26 19:29:33.180986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.094 [2024-11-26 19:29:33.181002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.094 [2024-11-26 19:29:33.181010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.094 [2024-11-26 19:29:33.181177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.094 [2024-11-26 19:29:33.181345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.094 [2024-11-26 19:29:33.181353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.094 [2024-11-26 19:29:33.181359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.094 [2024-11-26 19:29:33.181365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.094 [2024-11-26 19:29:33.193526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.094 [2024-11-26 19:29:33.193936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.094 [2024-11-26 19:29:33.193952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.094 [2024-11-26 19:29:33.193959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.094 [2024-11-26 19:29:33.194128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.094 [2024-11-26 19:29:33.194295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.094 [2024-11-26 19:29:33.194303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.094 [2024-11-26 19:29:33.194314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.094 [2024-11-26 19:29:33.194320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.354 [2024-11-26 19:29:33.206589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.354 [2024-11-26 19:29:33.206931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.354 [2024-11-26 19:29:33.206947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.354 [2024-11-26 19:29:33.206955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.354 [2024-11-26 19:29:33.207122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.354 [2024-11-26 19:29:33.207309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.354 [2024-11-26 19:29:33.207317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.354 [2024-11-26 19:29:33.207323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.354 [2024-11-26 19:29:33.207329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.354 [2024-11-26 19:29:33.219570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.354 [2024-11-26 19:29:33.219980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.354 [2024-11-26 19:29:33.219996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.354 [2024-11-26 19:29:33.220004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.354 [2024-11-26 19:29:33.220176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.354 [2024-11-26 19:29:33.220349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.354 [2024-11-26 19:29:33.220357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.354 [2024-11-26 19:29:33.220363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.354 [2024-11-26 19:29:33.220369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.354 [2024-11-26 19:29:33.232591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.354 [2024-11-26 19:29:33.232895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.354 [2024-11-26 19:29:33.232911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.354 [2024-11-26 19:29:33.232918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.354 [2024-11-26 19:29:33.233096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.354 [2024-11-26 19:29:33.233264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.354 [2024-11-26 19:29:33.233272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.354 [2024-11-26 19:29:33.233278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.354 [2024-11-26 19:29:33.233284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.354 [2024-11-26 19:29:33.245483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.354 [2024-11-26 19:29:33.245908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.354 [2024-11-26 19:29:33.245952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.354 [2024-11-26 19:29:33.245975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.354 [2024-11-26 19:29:33.246427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.354 [2024-11-26 19:29:33.246595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.354 [2024-11-26 19:29:33.246603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.354 [2024-11-26 19:29:33.246609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.354 [2024-11-26 19:29:33.246615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.354 [2024-11-26 19:29:33.258373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.354 [2024-11-26 19:29:33.258721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.354 [2024-11-26 19:29:33.258742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.354 [2024-11-26 19:29:33.258750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.354 [2024-11-26 19:29:33.258918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.354 [2024-11-26 19:29:33.259087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.355 [2024-11-26 19:29:33.259095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.355 [2024-11-26 19:29:33.259100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.355 [2024-11-26 19:29:33.259107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.355 [2024-11-26 19:29:33.271400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.355 [2024-11-26 19:29:33.271800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.355 [2024-11-26 19:29:33.271817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.355 [2024-11-26 19:29:33.271825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.355 [2024-11-26 19:29:33.271998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.355 [2024-11-26 19:29:33.272170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.355 [2024-11-26 19:29:33.272178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.355 [2024-11-26 19:29:33.272184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.355 [2024-11-26 19:29:33.272191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.355 [2024-11-26 19:29:33.284493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.355 [2024-11-26 19:29:33.284850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.355 [2024-11-26 19:29:33.284867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.355 [2024-11-26 19:29:33.284877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.355 [2024-11-26 19:29:33.285050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.355 [2024-11-26 19:29:33.285224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.355 [2024-11-26 19:29:33.285233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.355 [2024-11-26 19:29:33.285239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.355 [2024-11-26 19:29:33.285245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.355 [2024-11-26 19:29:33.297494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.355 [2024-11-26 19:29:33.297801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.355 [2024-11-26 19:29:33.297817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.355 [2024-11-26 19:29:33.297824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.355 [2024-11-26 19:29:33.297992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.355 [2024-11-26 19:29:33.298160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.355 [2024-11-26 19:29:33.298168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.355 [2024-11-26 19:29:33.298174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.355 [2024-11-26 19:29:33.298180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.355 [2024-11-26 19:29:33.310333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.355 [2024-11-26 19:29:33.310703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.355 [2024-11-26 19:29:33.310719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.355 [2024-11-26 19:29:33.310727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.355 [2024-11-26 19:29:33.310904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.355 [2024-11-26 19:29:33.311063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.355 [2024-11-26 19:29:33.311071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.355 [2024-11-26 19:29:33.311077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.355 [2024-11-26 19:29:33.311082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.355 [2024-11-26 19:29:33.323273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.355 [2024-11-26 19:29:33.323697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.355 [2024-11-26 19:29:33.323713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.355 [2024-11-26 19:29:33.323721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.355 [2024-11-26 19:29:33.323889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.355 [2024-11-26 19:29:33.324063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.355 [2024-11-26 19:29:33.324070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.355 [2024-11-26 19:29:33.324076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.355 [2024-11-26 19:29:33.324082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.355 [2024-11-26 19:29:33.336223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.355 [2024-11-26 19:29:33.336624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.355 [2024-11-26 19:29:33.336639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.355 [2024-11-26 19:29:33.336646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.355 [2024-11-26 19:29:33.336819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.355 [2024-11-26 19:29:33.336987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.355 [2024-11-26 19:29:33.336995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.355 [2024-11-26 19:29:33.337001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.355 [2024-11-26 19:29:33.337007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.355 [2024-11-26 19:29:33.349181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.355 [2024-11-26 19:29:33.349608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.355 [2024-11-26 19:29:33.349652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.355 [2024-11-26 19:29:33.349689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.355 [2024-11-26 19:29:33.350275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.355 [2024-11-26 19:29:33.350473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.355 [2024-11-26 19:29:33.350482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.355 [2024-11-26 19:29:33.350489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.355 [2024-11-26 19:29:33.350496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.355 [2024-11-26 19:29:33.362063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.355 [2024-11-26 19:29:33.362462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.355 [2024-11-26 19:29:33.362478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.355 [2024-11-26 19:29:33.362485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.355 [2024-11-26 19:29:33.362654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.355 [2024-11-26 19:29:33.362827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.355 [2024-11-26 19:29:33.362840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.355 [2024-11-26 19:29:33.362850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.355 [2024-11-26 19:29:33.362856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.355 5700.80 IOPS, 22.27 MiB/s [2024-11-26T18:29:33.469Z] [2024-11-26 19:29:33.376366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.355 [2024-11-26 19:29:33.376741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.355 [2024-11-26 19:29:33.376757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.355 [2024-11-26 19:29:33.376764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.355 [2024-11-26 19:29:33.376933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.355 [2024-11-26 19:29:33.377105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.355 [2024-11-26 19:29:33.377112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.355 [2024-11-26 19:29:33.377120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.355 [2024-11-26 19:29:33.377126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.355 [2024-11-26 19:29:33.389316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.355 [2024-11-26 19:29:33.389660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.355 [2024-11-26 19:29:33.389717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.356 [2024-11-26 19:29:33.389741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.356 [2024-11-26 19:29:33.390323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.356 [2024-11-26 19:29:33.390545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.356 [2024-11-26 19:29:33.390552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.356 [2024-11-26 19:29:33.390558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.356 [2024-11-26 19:29:33.390564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.356 [2024-11-26 19:29:33.402233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.356 [2024-11-26 19:29:33.402576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.356 [2024-11-26 19:29:33.402619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.356 [2024-11-26 19:29:33.402642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.356 [2024-11-26 19:29:33.403238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.356 [2024-11-26 19:29:33.403724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.356 [2024-11-26 19:29:33.403733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.356 [2024-11-26 19:29:33.403739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.356 [2024-11-26 19:29:33.403745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.356 [2024-11-26 19:29:33.414997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.356 [2024-11-26 19:29:33.415394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.356 [2024-11-26 19:29:33.415410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.356 [2024-11-26 19:29:33.415416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.356 [2024-11-26 19:29:33.415575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.356 [2024-11-26 19:29:33.415756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.356 [2024-11-26 19:29:33.415765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.356 [2024-11-26 19:29:33.415771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.356 [2024-11-26 19:29:33.415777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.356 [2024-11-26 19:29:33.427782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.356 [2024-11-26 19:29:33.428167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.356 [2024-11-26 19:29:33.428183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.356 [2024-11-26 19:29:33.428190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.356 [2024-11-26 19:29:33.428349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.356 [2024-11-26 19:29:33.428508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.356 [2024-11-26 19:29:33.428515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.356 [2024-11-26 19:29:33.428521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.356 [2024-11-26 19:29:33.428527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.356 [2024-11-26 19:29:33.440662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.356 [2024-11-26 19:29:33.441077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.356 [2024-11-26 19:29:33.441094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.356 [2024-11-26 19:29:33.441102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.356 [2024-11-26 19:29:33.441276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.356 [2024-11-26 19:29:33.441451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.356 [2024-11-26 19:29:33.441459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.356 [2024-11-26 19:29:33.441466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.356 [2024-11-26 19:29:33.441472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.356 [2024-11-26 19:29:33.453509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.356 [2024-11-26 19:29:33.453903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.356 [2024-11-26 19:29:33.453920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.356 [2024-11-26 19:29:33.453931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.356 [2024-11-26 19:29:33.454098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.356 [2024-11-26 19:29:33.454266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.356 [2024-11-26 19:29:33.454273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.356 [2024-11-26 19:29:33.454279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.356 [2024-11-26 19:29:33.454285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.624 [2024-11-26 19:29:33.466562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.624 [2024-11-26 19:29:33.466988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.624 [2024-11-26 19:29:33.467004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.624 [2024-11-26 19:29:33.467012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.624 [2024-11-26 19:29:33.467184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.624 [2024-11-26 19:29:33.467357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.624 [2024-11-26 19:29:33.467365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.624 [2024-11-26 19:29:33.467372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.624 [2024-11-26 19:29:33.467378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.624 [2024-11-26 19:29:33.479639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.624 [2024-11-26 19:29:33.480050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.624 [2024-11-26 19:29:33.480067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.624 [2024-11-26 19:29:33.480074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.624 [2024-11-26 19:29:33.480248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.624 [2024-11-26 19:29:33.480422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.624 [2024-11-26 19:29:33.480430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.624 [2024-11-26 19:29:33.480437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.625 [2024-11-26 19:29:33.480442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.625 [2024-11-26 19:29:33.492666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.625 [2024-11-26 19:29:33.492960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.625 [2024-11-26 19:29:33.492975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.625 [2024-11-26 19:29:33.492982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.625 [2024-11-26 19:29:33.493150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.625 [2024-11-26 19:29:33.493322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.625 [2024-11-26 19:29:33.493330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.625 [2024-11-26 19:29:33.493336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.625 [2024-11-26 19:29:33.493342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.625 [2024-11-26 19:29:33.505589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.625 [2024-11-26 19:29:33.506031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.625 [2024-11-26 19:29:33.506075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.625 [2024-11-26 19:29:33.506099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.625 [2024-11-26 19:29:33.506621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.625 [2024-11-26 19:29:33.506795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.625 [2024-11-26 19:29:33.506803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.625 [2024-11-26 19:29:33.506809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.625 [2024-11-26 19:29:33.506815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.625 [2024-11-26 19:29:33.518410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.625 [2024-11-26 19:29:33.518801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.625 [2024-11-26 19:29:33.518817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.625 [2024-11-26 19:29:33.518824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.625 [2024-11-26 19:29:33.518982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.625 [2024-11-26 19:29:33.519141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.625 [2024-11-26 19:29:33.519148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.625 [2024-11-26 19:29:33.519154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.625 [2024-11-26 19:29:33.519160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.625 [2024-11-26 19:29:33.531145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.625 [2024-11-26 19:29:33.531535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.625 [2024-11-26 19:29:33.531551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.625 [2024-11-26 19:29:33.531558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.625 [2024-11-26 19:29:33.531739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.625 [2024-11-26 19:29:33.531907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.625 [2024-11-26 19:29:33.531915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.625 [2024-11-26 19:29:33.531924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.625 [2024-11-26 19:29:33.531930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.625 [2024-11-26 19:29:33.544043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.625 [2024-11-26 19:29:33.544362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.625 [2024-11-26 19:29:33.544378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.625 [2024-11-26 19:29:33.544385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.625 [2024-11-26 19:29:33.544553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.625 [2024-11-26 19:29:33.544743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.625 [2024-11-26 19:29:33.544751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.625 [2024-11-26 19:29:33.544757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.625 [2024-11-26 19:29:33.544764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.625 [2024-11-26 19:29:33.556879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.625 [2024-11-26 19:29:33.557295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.625 [2024-11-26 19:29:33.557312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.625 [2024-11-26 19:29:33.557319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.625 [2024-11-26 19:29:33.557487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.625 [2024-11-26 19:29:33.557654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.625 [2024-11-26 19:29:33.557661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.625 [2024-11-26 19:29:33.557668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.625 [2024-11-26 19:29:33.557682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.625 [2024-11-26 19:29:33.569664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.625 [2024-11-26 19:29:33.570083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.625 [2024-11-26 19:29:33.570127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.625 [2024-11-26 19:29:33.570150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.625 [2024-11-26 19:29:33.570748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.625 [2024-11-26 19:29:33.571151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.625 [2024-11-26 19:29:33.571159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.625 [2024-11-26 19:29:33.571165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.625 [2024-11-26 19:29:33.571171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.625 [2024-11-26 19:29:33.582495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.625 [2024-11-26 19:29:33.582897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.625 [2024-11-26 19:29:33.582914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.625 [2024-11-26 19:29:33.582921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.625 [2024-11-26 19:29:33.583090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.625 [2024-11-26 19:29:33.583257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.625 [2024-11-26 19:29:33.583265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.625 [2024-11-26 19:29:33.583271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.625 [2024-11-26 19:29:33.583277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.625 [2024-11-26 19:29:33.595293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.625 [2024-11-26 19:29:33.595714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.625 [2024-11-26 19:29:33.595731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.625 [2024-11-26 19:29:33.595737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.625 [2024-11-26 19:29:33.595896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.625 [2024-11-26 19:29:33.596055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.625 [2024-11-26 19:29:33.596062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.625 [2024-11-26 19:29:33.596068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.625 [2024-11-26 19:29:33.596074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.625 [2024-11-26 19:29:33.608254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.625 [2024-11-26 19:29:33.608676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.625 [2024-11-26 19:29:33.608692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.625 [2024-11-26 19:29:33.608699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.625 [2024-11-26 19:29:33.608867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.625 [2024-11-26 19:29:33.609036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.625 [2024-11-26 19:29:33.609044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.626 [2024-11-26 19:29:33.609051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.626 [2024-11-26 19:29:33.609059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.626 [2024-11-26 19:29:33.621165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.626 [2024-11-26 19:29:33.621616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.626 [2024-11-26 19:29:33.621660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.626 [2024-11-26 19:29:33.621705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.626 [2024-11-26 19:29:33.622236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.626 [2024-11-26 19:29:33.622405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.626 [2024-11-26 19:29:33.622413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.626 [2024-11-26 19:29:33.622419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.626 [2024-11-26 19:29:33.622426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.626 [2024-11-26 19:29:33.634059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.626 [2024-11-26 19:29:33.634466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.626 [2024-11-26 19:29:33.634482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.626 [2024-11-26 19:29:33.634490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.626 [2024-11-26 19:29:33.634657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.626 [2024-11-26 19:29:33.634831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.626 [2024-11-26 19:29:33.634839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.626 [2024-11-26 19:29:33.634846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.626 [2024-11-26 19:29:33.634852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.626 [2024-11-26 19:29:33.646974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.626 [2024-11-26 19:29:33.647361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.626 [2024-11-26 19:29:33.647376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.626 [2024-11-26 19:29:33.647383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.626 [2024-11-26 19:29:33.647542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.626 [2024-11-26 19:29:33.647721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.626 [2024-11-26 19:29:33.647729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.626 [2024-11-26 19:29:33.647736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.626 [2024-11-26 19:29:33.647742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.626 [2024-11-26 19:29:33.659908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.626 [2024-11-26 19:29:33.660288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.626 [2024-11-26 19:29:33.660303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.626 [2024-11-26 19:29:33.660311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.626 [2024-11-26 19:29:33.660479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.626 [2024-11-26 19:29:33.660650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.626 [2024-11-26 19:29:33.660659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.626 [2024-11-26 19:29:33.660665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.626 [2024-11-26 19:29:33.660677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.626 [2024-11-26 19:29:33.672789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.626 [2024-11-26 19:29:33.673208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.626 [2024-11-26 19:29:33.673224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.626 [2024-11-26 19:29:33.673231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.626 [2024-11-26 19:29:33.673400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.626 [2024-11-26 19:29:33.673567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.626 [2024-11-26 19:29:33.673575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.626 [2024-11-26 19:29:33.673582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.626 [2024-11-26 19:29:33.673587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.626 [2024-11-26 19:29:33.685615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.626 [2024-11-26 19:29:33.686010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.626 [2024-11-26 19:29:33.686053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.626 [2024-11-26 19:29:33.686075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.626 [2024-11-26 19:29:33.686659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.626 [2024-11-26 19:29:33.687257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.626 [2024-11-26 19:29:33.687282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.626 [2024-11-26 19:29:33.687303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.626 [2024-11-26 19:29:33.687322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.626 [2024-11-26 19:29:33.698407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.626 [2024-11-26 19:29:33.698780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.626 [2024-11-26 19:29:33.698795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.626 [2024-11-26 19:29:33.698802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.626 [2024-11-26 19:29:33.698962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.626 [2024-11-26 19:29:33.699120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.626 [2024-11-26 19:29:33.699127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.626 [2024-11-26 19:29:33.699137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.626 [2024-11-26 19:29:33.699143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.626 [2024-11-26 19:29:33.711276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.626 [2024-11-26 19:29:33.711666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.626 [2024-11-26 19:29:33.711687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.626 [2024-11-26 19:29:33.711693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.626 [2024-11-26 19:29:33.711853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.626 [2024-11-26 19:29:33.712012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.626 [2024-11-26 19:29:33.712019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.626 [2024-11-26 19:29:33.712025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.626 [2024-11-26 19:29:33.712031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.626 [2024-11-26 19:29:33.724250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.626 [2024-11-26 19:29:33.724678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.626 [2024-11-26 19:29:33.724695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.626 [2024-11-26 19:29:33.724702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.626 [2024-11-26 19:29:33.724875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.626 [2024-11-26 19:29:33.725048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.626 [2024-11-26 19:29:33.725056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.626 [2024-11-26 19:29:33.725063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.626 [2024-11-26 19:29:33.725070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.887 [2024-11-26 19:29:33.737284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.887 [2024-11-26 19:29:33.737690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.887 [2024-11-26 19:29:33.737707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.887 [2024-11-26 19:29:33.737714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.887 [2024-11-26 19:29:33.737888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.887 [2024-11-26 19:29:33.738060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.888 [2024-11-26 19:29:33.738068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.888 [2024-11-26 19:29:33.738074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.888 [2024-11-26 19:29:33.738081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.888 [2024-11-26 19:29:33.750297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.888 [2024-11-26 19:29:33.750699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.888 [2024-11-26 19:29:33.750716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.888 [2024-11-26 19:29:33.750723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.888 [2024-11-26 19:29:33.750890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.888 [2024-11-26 19:29:33.751057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.888 [2024-11-26 19:29:33.751065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.888 [2024-11-26 19:29:33.751071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.888 [2024-11-26 19:29:33.751077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.888 [2024-11-26 19:29:33.763080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.888 [2024-11-26 19:29:33.763466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.888 [2024-11-26 19:29:33.763482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.888 [2024-11-26 19:29:33.763488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.888 [2024-11-26 19:29:33.763647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.888 [2024-11-26 19:29:33.763834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.888 [2024-11-26 19:29:33.763843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.888 [2024-11-26 19:29:33.763849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.888 [2024-11-26 19:29:33.763855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.888 [2024-11-26 19:29:33.775883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.888 [2024-11-26 19:29:33.776298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.888 [2024-11-26 19:29:33.776314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.888 [2024-11-26 19:29:33.776321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.888 [2024-11-26 19:29:33.776489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.888 [2024-11-26 19:29:33.776657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.888 [2024-11-26 19:29:33.776665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.888 [2024-11-26 19:29:33.776677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.888 [2024-11-26 19:29:33.776683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.888 [2024-11-26 19:29:33.788814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.888 [2024-11-26 19:29:33.789182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.888 [2024-11-26 19:29:33.789198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.888 [2024-11-26 19:29:33.789207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.888 [2024-11-26 19:29:33.789366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.888 [2024-11-26 19:29:33.789524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.888 [2024-11-26 19:29:33.789531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.888 [2024-11-26 19:29:33.789537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.888 [2024-11-26 19:29:33.789543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.888 [2024-11-26 19:29:33.801648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.888 [2024-11-26 19:29:33.802048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.888 [2024-11-26 19:29:33.802064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.888 [2024-11-26 19:29:33.802071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.888 [2024-11-26 19:29:33.802239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.888 [2024-11-26 19:29:33.802407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.888 [2024-11-26 19:29:33.802415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.888 [2024-11-26 19:29:33.802420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.888 [2024-11-26 19:29:33.802426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.888 [2024-11-26 19:29:33.814487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.888 [2024-11-26 19:29:33.814891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.888 [2024-11-26 19:29:33.814907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.888 [2024-11-26 19:29:33.814914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.888 [2024-11-26 19:29:33.815082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.888 [2024-11-26 19:29:33.815249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.888 [2024-11-26 19:29:33.815257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.888 [2024-11-26 19:29:33.815263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.888 [2024-11-26 19:29:33.815269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.888 [2024-11-26 19:29:33.827309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.888 [2024-11-26 19:29:33.827663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.888 [2024-11-26 19:29:33.827719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.888 [2024-11-26 19:29:33.827743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.888 [2024-11-26 19:29:33.828326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.888 [2024-11-26 19:29:33.828594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.888 [2024-11-26 19:29:33.828602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.888 [2024-11-26 19:29:33.828608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.888 [2024-11-26 19:29:33.828614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.888 [2024-11-26 19:29:33.840101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.888 [2024-11-26 19:29:33.840516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.888 [2024-11-26 19:29:33.840532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.888 [2024-11-26 19:29:33.840539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.888 [2024-11-26 19:29:33.840713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.888 [2024-11-26 19:29:33.840881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.888 [2024-11-26 19:29:33.840889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.888 [2024-11-26 19:29:33.840895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.888 [2024-11-26 19:29:33.840901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.888 [2024-11-26 19:29:33.853006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.888 [2024-11-26 19:29:33.853398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.888 [2024-11-26 19:29:33.853440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.888 [2024-11-26 19:29:33.853463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.888 [2024-11-26 19:29:33.854061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.888 [2024-11-26 19:29:33.854648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.888 [2024-11-26 19:29:33.854682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.888 [2024-11-26 19:29:33.854714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.888 [2024-11-26 19:29:33.854721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.888 [2024-11-26 19:29:33.865913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.888 [2024-11-26 19:29:33.866344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.888 [2024-11-26 19:29:33.866360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.888 [2024-11-26 19:29:33.866367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.889 [2024-11-26 19:29:33.866535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.889 [2024-11-26 19:29:33.866706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.889 [2024-11-26 19:29:33.866715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.889 [2024-11-26 19:29:33.866724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.889 [2024-11-26 19:29:33.866730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.889 [2024-11-26 19:29:33.878766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.889 [2024-11-26 19:29:33.879209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.889 [2024-11-26 19:29:33.879253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.889 [2024-11-26 19:29:33.879278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.889 [2024-11-26 19:29:33.879797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.889 [2024-11-26 19:29:33.879966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.889 [2024-11-26 19:29:33.879974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.889 [2024-11-26 19:29:33.879980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.889 [2024-11-26 19:29:33.879986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3899056 Killed "${NVMF_APP[@]}" "$@" 00:28:10.889 [2024-11-26 19:29:33.891685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.889 [2024-11-26 19:29:33.892109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.889 [2024-11-26 19:29:33.892126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.889 [2024-11-26 19:29:33.892133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.889 19:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:10.889 [2024-11-26 19:29:33.892305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.889 [2024-11-26 19:29:33.892485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.889 [2024-11-26 19:29:33.892492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.889 [2024-11-26 19:29:33.892499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.889 [2024-11-26 19:29:33.892505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.889 19:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:10.889 19:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:10.889 19:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:10.889 19:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:10.889 19:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3900443 00:28:10.889 19:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3900443 00:28:10.889 19:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:10.889 19:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3900443 ']' 00:28:10.889 19:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.889 19:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:10.889 19:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.889 19:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:10.889 19:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:10.889 [2024-11-26 19:29:33.904744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.889 [2024-11-26 19:29:33.905097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.889 [2024-11-26 19:29:33.905114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.889 [2024-11-26 19:29:33.905120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.889 [2024-11-26 19:29:33.905293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.889 [2024-11-26 19:29:33.905465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.889 [2024-11-26 19:29:33.905474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.889 [2024-11-26 19:29:33.905480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.889 [2024-11-26 19:29:33.905486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.889 [2024-11-26 19:29:33.917860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.889 [2024-11-26 19:29:33.918197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.889 [2024-11-26 19:29:33.918213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.889 [2024-11-26 19:29:33.918220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.889 [2024-11-26 19:29:33.918393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.889 [2024-11-26 19:29:33.918566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.889 [2024-11-26 19:29:33.918574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.889 [2024-11-26 19:29:33.918581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.889 [2024-11-26 19:29:33.918587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.889 [2024-11-26 19:29:33.930955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.889 [2024-11-26 19:29:33.931382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.889 [2024-11-26 19:29:33.931398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.889 [2024-11-26 19:29:33.931405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.889 [2024-11-26 19:29:33.931579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.889 [2024-11-26 19:29:33.931759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.889 [2024-11-26 19:29:33.931768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.889 [2024-11-26 19:29:33.931774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.889 [2024-11-26 19:29:33.931784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.889 [2024-11-26 19:29:33.943991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.889 [2024-11-26 19:29:33.944437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.889 [2024-11-26 19:29:33.944454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.889 [2024-11-26 19:29:33.944461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.889 [2024-11-26 19:29:33.944634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.889 [2024-11-26 19:29:33.944815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.889 [2024-11-26 19:29:33.944824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.889 [2024-11-26 19:29:33.944830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.889 [2024-11-26 19:29:33.944836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.889 [2024-11-26 19:29:33.948492] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:28:10.889 [2024-11-26 19:29:33.948529] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.889 [2024-11-26 19:29:33.956999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.889 [2024-11-26 19:29:33.957443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.889 [2024-11-26 19:29:33.957460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.889 [2024-11-26 19:29:33.957467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.889 [2024-11-26 19:29:33.957642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.889 [2024-11-26 19:29:33.957819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.889 [2024-11-26 19:29:33.957828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.889 [2024-11-26 19:29:33.957834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.889 [2024-11-26 19:29:33.957841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.889 [2024-11-26 19:29:33.970079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.889 [2024-11-26 19:29:33.970512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.889 [2024-11-26 19:29:33.970529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.889 [2024-11-26 19:29:33.970536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.889 [2024-11-26 19:29:33.970714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.889 [2024-11-26 19:29:33.970887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.890 [2024-11-26 19:29:33.970896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.890 [2024-11-26 19:29:33.970902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.890 [2024-11-26 19:29:33.970915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.890 [2024-11-26 19:29:33.983072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.890 [2024-11-26 19:29:33.983427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.890 [2024-11-26 19:29:33.983443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.890 [2024-11-26 19:29:33.983450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.890 [2024-11-26 19:29:33.983623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.890 [2024-11-26 19:29:33.983802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.890 [2024-11-26 19:29:33.983811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.890 [2024-11-26 19:29:33.983818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.890 [2024-11-26 19:29:33.983824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.890 [2024-11-26 19:29:33.996035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.890 [2024-11-26 19:29:33.996465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.890 [2024-11-26 19:29:33.996481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:10.890 [2024-11-26 19:29:33.996489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:10.890 [2024-11-26 19:29:33.996662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:10.890 [2024-11-26 19:29:33.996840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.890 [2024-11-26 19:29:33.996849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.890 [2024-11-26 19:29:33.996856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.890 [2024-11-26 19:29:33.996862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.151 [2024-11-26 19:29:34.009056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.151 [2024-11-26 19:29:34.009467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.151 [2024-11-26 19:29:34.009483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.151 [2024-11-26 19:29:34.009491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.151 [2024-11-26 19:29:34.009664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.151 [2024-11-26 19:29:34.009842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.151 [2024-11-26 19:29:34.009850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.151 [2024-11-26 19:29:34.009857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.151 [2024-11-26 19:29:34.009863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.151 [2024-11-26 19:29:34.022018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.151 [2024-11-26 19:29:34.022464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.151 [2024-11-26 19:29:34.022481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.151 [2024-11-26 19:29:34.022488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.151 [2024-11-26 19:29:34.022662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.151 [2024-11-26 19:29:34.022839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.151 [2024-11-26 19:29:34.022848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.151 [2024-11-26 19:29:34.022854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.151 [2024-11-26 19:29:34.022860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.151 [2024-11-26 19:29:34.028572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:11.151 [2024-11-26 19:29:34.035123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.151 [2024-11-26 19:29:34.035560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.151 [2024-11-26 19:29:34.035577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.151 [2024-11-26 19:29:34.035585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.151 [2024-11-26 19:29:34.035764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.151 [2024-11-26 19:29:34.035937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.151 [2024-11-26 19:29:34.035945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.151 [2024-11-26 19:29:34.035952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.151 [2024-11-26 19:29:34.035958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.151 [2024-11-26 19:29:34.048070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.151 [2024-11-26 19:29:34.048521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.151 [2024-11-26 19:29:34.048538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.151 [2024-11-26 19:29:34.048546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.151 [2024-11-26 19:29:34.048725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.151 [2024-11-26 19:29:34.048898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.151 [2024-11-26 19:29:34.048907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.151 [2024-11-26 19:29:34.048914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.151 [2024-11-26 19:29:34.048920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.151 [2024-11-26 19:29:34.061151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.151 [2024-11-26 19:29:34.061490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.151 [2024-11-26 19:29:34.061507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.151 [2024-11-26 19:29:34.061519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.151 [2024-11-26 19:29:34.061699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.152 [2024-11-26 19:29:34.061874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.152 [2024-11-26 19:29:34.061882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.152 [2024-11-26 19:29:34.061888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.152 [2024-11-26 19:29:34.061894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.152 [2024-11-26 19:29:34.070589] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:11.152 [2024-11-26 19:29:34.070611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:11.152 [2024-11-26 19:29:34.070618] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:11.152 [2024-11-26 19:29:34.070624] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:11.152 [2024-11-26 19:29:34.070629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
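The app_setup_trace notices at the end of the block above give the exact commands for pulling a trace out of this nvmf target instance (tracepoint group mask 0xFFFF, shm id 0). A sketch of both options follows, assuming the spdk_trace tool from the SPDK build is on PATH; the shm file name /dev/shm/nvmf_trace.0 is the one printed in this run.
# Sketch only, assuming spdk_trace from the SPDK build is on PATH.
spdk_trace -s nvmf -i 0 > nvmf_trace.txt   # parse a live snapshot of the tracepoints
cp /dev/shm/nvmf_trace.0 /tmp/             # or keep the raw shm buffer for offline analysis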
00:28:11.152 [2024-11-26 19:29:34.071984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:11.152 [2024-11-26 19:29:34.072094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.152 [2024-11-26 19:29:34.072095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:11.152 [2024-11-26 19:29:34.074218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.152 [2024-11-26 19:29:34.074632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.152 [2024-11-26 19:29:34.074650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.152 [2024-11-26 19:29:34.074658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.152 [2024-11-26 19:29:34.074838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.152 [2024-11-26 19:29:34.075012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.152 [2024-11-26 19:29:34.075020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.152 [2024-11-26 19:29:34.075027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.152 [2024-11-26 19:29:34.075033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.152 [2024-11-26 19:29:34.087256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.152 [2024-11-26 19:29:34.087708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.152 [2024-11-26 19:29:34.087729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.152 [2024-11-26 19:29:34.087737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.152 [2024-11-26 19:29:34.087912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.152 [2024-11-26 19:29:34.088089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.152 [2024-11-26 19:29:34.088097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.152 [2024-11-26 19:29:34.088110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.152 [2024-11-26 19:29:34.088117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
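The three "Reactor started on core N" notices line up with the EAL parameters logged earlier for this app (nvmf -c 0xE): mask 0xE is binary 1110, so cores 1, 2 and 3 run reactors and the app reports "Total cores available: 3". A small, purely illustrative decoding of that mask:
# Illustrative: decode the reactor cores from the -c 0xE core mask seen above.
mask=0xE
for core in $(seq 0 31); do
    (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
done
# prints cores 1, 2 and 3, matching the three reactor_run notices.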
00:28:11.152 [2024-11-26 19:29:34.100310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.152 [2024-11-26 19:29:34.100693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.152 [2024-11-26 19:29:34.100713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.152 [2024-11-26 19:29:34.100721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.152 [2024-11-26 19:29:34.100896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.152 [2024-11-26 19:29:34.101070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.152 [2024-11-26 19:29:34.101078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.152 [2024-11-26 19:29:34.101085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.152 [2024-11-26 19:29:34.101092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.152 [2024-11-26 19:29:34.113295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.152 [2024-11-26 19:29:34.113743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.152 [2024-11-26 19:29:34.113762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.152 [2024-11-26 19:29:34.113771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.152 [2024-11-26 19:29:34.113945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.152 [2024-11-26 19:29:34.114119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.152 [2024-11-26 19:29:34.114127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.152 [2024-11-26 19:29:34.114134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.152 [2024-11-26 19:29:34.114140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.152 [2024-11-26 19:29:34.126340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.152 [2024-11-26 19:29:34.126767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.152 [2024-11-26 19:29:34.126787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.152 [2024-11-26 19:29:34.126796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.152 [2024-11-26 19:29:34.126970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.152 [2024-11-26 19:29:34.127143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.152 [2024-11-26 19:29:34.127151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.152 [2024-11-26 19:29:34.127158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.152 [2024-11-26 19:29:34.127165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.152 [2024-11-26 19:29:34.139392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.152 [2024-11-26 19:29:34.139844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.152 [2024-11-26 19:29:34.139861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.152 [2024-11-26 19:29:34.139869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.152 [2024-11-26 19:29:34.140043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.152 [2024-11-26 19:29:34.140216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.152 [2024-11-26 19:29:34.140225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.152 [2024-11-26 19:29:34.140231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.152 [2024-11-26 19:29:34.140238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.152 [2024-11-26 19:29:34.152441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.152 [2024-11-26 19:29:34.152844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.152 [2024-11-26 19:29:34.152861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.153 [2024-11-26 19:29:34.152868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.153 [2024-11-26 19:29:34.153042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.153 [2024-11-26 19:29:34.153214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.153 [2024-11-26 19:29:34.153223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.153 [2024-11-26 19:29:34.153229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.153 [2024-11-26 19:29:34.153235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.153 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:11.153 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:11.153 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:11.153 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:11.153 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.153 [2024-11-26 19:29:34.165435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.153 [2024-11-26 19:29:34.165861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.153 [2024-11-26 19:29:34.165878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.153 [2024-11-26 19:29:34.165887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.153 [2024-11-26 19:29:34.166062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.153 [2024-11-26 19:29:34.166235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.153 [2024-11-26 19:29:34.166243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.153 [2024-11-26 19:29:34.166249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.153 [2024-11-26 19:29:34.166255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.153 [2024-11-26 19:29:34.178480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.153 [2024-11-26 19:29:34.178931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.153 [2024-11-26 19:29:34.178948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.153 [2024-11-26 19:29:34.178956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.153 [2024-11-26 19:29:34.179129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.153 [2024-11-26 19:29:34.179303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.153 [2024-11-26 19:29:34.179312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.153 [2024-11-26 19:29:34.179318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.153 [2024-11-26 19:29:34.179325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.153 [2024-11-26 19:29:34.191531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.153 [2024-11-26 19:29:34.191870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.153 [2024-11-26 19:29:34.191888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.153 [2024-11-26 19:29:34.191895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.153 [2024-11-26 19:29:34.192069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.153 [2024-11-26 19:29:34.192243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.153 [2024-11-26 19:29:34.192251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.153 [2024-11-26 19:29:34.192257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.153 [2024-11-26 19:29:34.192264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.153 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:11.153 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:11.153 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.153 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.153 [2024-11-26 19:29:34.204624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.153 [2024-11-26 19:29:34.204984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.153 [2024-11-26 19:29:34.205001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.153 [2024-11-26 19:29:34.205008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.153 [2024-11-26 19:29:34.205180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.153 [2024-11-26 19:29:34.205353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.153 [2024-11-26 19:29:34.205361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.153 [2024-11-26 19:29:34.205367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.153 [2024-11-26 19:29:34.205376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.153 [2024-11-26 19:29:34.208173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.153 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.153 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:11.153 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.153 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.153 [2024-11-26 19:29:34.217732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.153 [2024-11-26 19:29:34.218112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.153 [2024-11-26 19:29:34.218129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.153 [2024-11-26 19:29:34.218136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.153 [2024-11-26 19:29:34.218309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.153 [2024-11-26 19:29:34.218483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.153 [2024-11-26 19:29:34.218491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.153 [2024-11-26 19:29:34.218497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.153 [2024-11-26 19:29:34.218503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.153 [2024-11-26 19:29:34.230715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.153 [2024-11-26 19:29:34.231143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.153 [2024-11-26 19:29:34.231159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.153 [2024-11-26 19:29:34.231166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.153 [2024-11-26 19:29:34.231339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.154 [2024-11-26 19:29:34.231513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.154 [2024-11-26 19:29:34.231520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.154 [2024-11-26 19:29:34.231527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.154 [2024-11-26 19:29:34.231533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.154 [2024-11-26 19:29:34.243754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.154 Malloc0 00:28:11.154 [2024-11-26 19:29:34.244182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.154 [2024-11-26 19:29:34.244199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.154 [2024-11-26 19:29:34.244206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.154 [2024-11-26 19:29:34.244379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.154 [2024-11-26 19:29:34.244552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.154 [2024-11-26 19:29:34.244564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.154 [2024-11-26 19:29:34.244571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.154 [2024-11-26 19:29:34.244577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.154 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.154 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:11.154 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.154 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.154 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.154 [2024-11-26 19:29:34.256841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.154 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:11.154 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.154 [2024-11-26 19:29:34.257256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.154 [2024-11-26 19:29:34.257273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a510 with addr=10.0.0.2, port=4420 00:28:11.154 [2024-11-26 19:29:34.257281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a510 is same with the state(6) to be set 00:28:11.154 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.154 [2024-11-26 19:29:34.257455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a510 (9): Bad file descriptor 00:28:11.154 [2024-11-26 19:29:34.257629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.154 [2024-11-26 19:29:34.257638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.154 [2024-11-26 19:29:34.257644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:28:11.154 [2024-11-26 19:29:34.257650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.414 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.414 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:11.414 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.414 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.414 [2024-11-26 19:29:34.267985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.414 [2024-11-26 19:29:34.269891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.414 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.414 19:29:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3899491 00:28:11.414 4750.67 IOPS, 18.56 MiB/s [2024-11-26T18:29:34.528Z] [2024-11-26 19:29:34.455273] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:28:13.292 5574.57 IOPS, 21.78 MiB/s [2024-11-26T18:29:37.786Z] 6308.00 IOPS, 24.64 MiB/s [2024-11-26T18:29:38.725Z] 6876.44 IOPS, 26.86 MiB/s [2024-11-26T18:29:39.664Z] 7310.50 IOPS, 28.56 MiB/s [2024-11-26T18:29:40.604Z] 7686.36 IOPS, 30.02 MiB/s [2024-11-26T18:29:41.543Z] 7986.92 IOPS, 31.20 MiB/s [2024-11-26T18:29:42.481Z] 8247.38 IOPS, 32.22 MiB/s [2024-11-26T18:29:43.421Z] 8473.79 IOPS, 33.10 MiB/s [2024-11-26T18:29:43.681Z] 8677.13 IOPS, 33.90 MiB/s 00:28:20.567 Latency(us) 00:28:20.567 [2024-11-26T18:29:43.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.567 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:20.567 Verification LBA range: start 0x0 length 0x4000 00:28:20.567 Nvme1n1 : 15.05 8656.77 33.82 11496.93 0.00 6315.01 434.96 40694.74 00:28:20.567 [2024-11-26T18:29:43.681Z] =================================================================================================================== 00:28:20.567 [2024-11-26T18:29:43.681Z] Total : 8656.77 33.82 11496.93 0.00 6315.01 434.96 40694.74 00:28:20.567 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:20.567 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:20.567 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.567 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:20.567 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.567 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:20.567 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:20.567 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:20.567 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:28:20.567 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:20.567 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 
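The rpc_cmd calls traced through host/bdevperf.sh above (lines 17 to 21 of that script) build the target that bdevperf then exercises: a TCP transport, a 64 MB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a TCP listener on 10.0.0.2:4420. Outside the test harness the same setup could be driven with scripts/rpc.py against a running nvmf_tgt, as sketched below; the rpc.py location and default RPC socket are assumptions, the method names and arguments are the ones in the log.
# Sketch of the traced setup as standalone rpc.py calls (assumes a running
# nvmf_tgt and the default /var/tmp/spdk.sock RPC socket).
RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                                    # host/bdevperf.sh@17
$RPC bdev_malloc_create 64 512 -b Malloc0                                       # host/bdevperf.sh@18
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # host/bdevperf.sh@19
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # host/bdevperf.sh@20
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # host/bdevperf.sh@21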
00:28:20.567 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:20.567 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:20.567 rmmod nvme_tcp 00:28:20.567 rmmod nvme_fabrics 00:28:20.567 rmmod nvme_keyring 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3900443 ']' 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3900443 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3900443 ']' 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3900443 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3900443 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3900443' 00:28:20.827 killing process with pid 3900443 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3900443 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3900443 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:20.827 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.086 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:21.086 19:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.992 19:29:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:22.992 00:28:22.992 real 0m26.282s 00:28:22.992 user 1m1.523s 00:28:22.992 sys 0m6.818s 00:28:22.992 19:29:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:28:22.992 19:29:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:22.992 ************************************ 00:28:22.992 END TEST nvmf_bdevperf 00:28:22.992 ************************************ 00:28:22.992 19:29:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:22.992 19:29:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:22.992 19:29:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:22.992 19:29:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.992 ************************************ 00:28:22.992 START TEST nvmf_target_disconnect 00:28:22.992 ************************************ 00:28:22.992 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:23.251 * Looking for test storage... 00:28:23.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:23.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.251 --rc genhtml_branch_coverage=1 00:28:23.251 --rc genhtml_function_coverage=1 00:28:23.251 --rc genhtml_legend=1 00:28:23.251 --rc geninfo_all_blocks=1 00:28:23.251 --rc geninfo_unexecuted_blocks=1 00:28:23.251 00:28:23.251 ' 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:23.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.251 --rc genhtml_branch_coverage=1 00:28:23.251 --rc genhtml_function_coverage=1 00:28:23.251 --rc genhtml_legend=1 00:28:23.251 --rc geninfo_all_blocks=1 00:28:23.251 --rc geninfo_unexecuted_blocks=1 00:28:23.251 00:28:23.251 ' 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:23.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.251 --rc genhtml_branch_coverage=1 00:28:23.251 --rc genhtml_function_coverage=1 00:28:23.251 --rc genhtml_legend=1 00:28:23.251 --rc geninfo_all_blocks=1 00:28:23.251 --rc geninfo_unexecuted_blocks=1 00:28:23.251 00:28:23.251 ' 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:23.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.251 --rc genhtml_branch_coverage=1 00:28:23.251 --rc genhtml_function_coverage=1 00:28:23.251 --rc genhtml_legend=1 00:28:23.251 --rc geninfo_all_blocks=1 00:28:23.251 --rc geninfo_unexecuted_blocks=1 00:28:23.251 00:28:23.251 ' 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:23.251 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:23.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:23.252 19:29:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:29.825 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:29.825 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:29.825 Found net devices under 0000:86:00.0: cvl_0_0 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:29.825 Found net devices under 0000:86:00.1: cvl_0_1 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
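Both E810 physical functions (0000:86:00.0 and 0000:86:00.1, device 0x159b bound to ice) have now been matched and their kernel interfaces, cvl_0_0 and cvl_0_1, resolved from /sys/bus/pci/devices/<pci>/net/. The nvmf_tcp_init entries that follow build the TCP test topology around them: cvl_0_0 becomes the target-side port inside a dedicated network namespace, while cvl_0_1 stays in the root namespace as the initiator-side port. A condensed sketch of the equivalent commands, using only the names and addresses reported in this trace (illustration, not a substitute for nvmf_tcp_init):

    # Sketch only: mirrors the nvmf_tcp_init steps visible in the trace below.
    TARGET_IF=cvl_0_0            # physical port handed to the SPDK target
    INITIATOR_IF=cvl_0_1         # port left in the root namespace for the initiator
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF" && ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # allow NVMe/TCP traffic (port 4420) in from the initiator-side port
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    # reachability check in both directions, as the trace does next
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1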
00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:29.825 19:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:29.825 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:29.825 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:29.825 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:29.825 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:29.825 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:29.825 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:29.825 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:29.825 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:29.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:29.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:28:29.826 00:28:29.826 --- 10.0.0.2 ping statistics --- 00:28:29.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.826 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:29.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
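The reverse reachability check completes just below, nvmftestinit returns, and nvme-tcp is loaded; run_test then starts nvmf_target_disconnect_tc1. At that point no target is listening on 10.0.0.2:4420 yet, so the reconnect example is expected to fail, and the harness wraps it in NOT so that the probe failure is what makes the test pass (the es=1 accounting visible further on). A minimal stand-in for that expect-failure pattern, with NOT written out here as a hypothetical one-liner rather than the real autotest_common.sh helper:

    # Hypothetical expect-failure wrapper: succeed only when the wrapped command fails.
    NOT() { ! "$@"; }

    # Invocation mirrors the tc1 command line from the trace (run from the SPDK build tree).
    NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        && echo "spdk_nvme_probe failed as expected: nothing listens on 10.0.0.2:4420 yet"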
00:28:29.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:28:29.826 00:28:29.826 --- 10.0.0.1 ping statistics --- 00:28:29.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.826 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:29.826 ************************************ 00:28:29.826 START TEST nvmf_target_disconnect_tc1 00:28:29.826 ************************************ 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:29.826 19:29:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:29.826 [2024-11-26 19:29:52.379907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.826 [2024-11-26 19:29:52.379955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1decac0 with addr=10.0.0.2, port=4420 00:28:29.826 [2024-11-26 19:29:52.379992] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:29.826 [2024-11-26 19:29:52.380005] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:29.826 [2024-11-26 19:29:52.380012] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:29.826 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:29.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:29.826 Initializing NVMe Controllers 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:29.826 00:28:29.826 real 0m0.120s 00:28:29.826 user 0m0.054s 00:28:29.826 sys 0m0.062s 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.826 ************************************ 00:28:29.826 END TEST nvmf_target_disconnect_tc1 00:28:29.826 ************************************ 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:29.826 ************************************ 00:28:29.826 START TEST nvmf_target_disconnect_tc2 00:28:29.826 ************************************ 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3905609 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3905609 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3905609 ']' 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:29.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:29.826 19:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.826 [2024-11-26 19:29:52.520082] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:28:29.826 [2024-11-26 19:29:52.520120] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.826 [2024-11-26 19:29:52.599403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:29.826 [2024-11-26 19:29:52.640914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:29.826 [2024-11-26 19:29:52.640952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
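The startup notices continue below; once the reactors on cores 4-7 (mask 0xF0) report in and waitforlisten returns, the harness configures the freshly started target (nvmf_tgt -i 0 -e 0xFFFF -m 0xF0, running inside cvl_0_0_ns_spdk) through rpc_cmd. Those calls correspond to standard SPDK RPCs; the following is a hedged sketch of the same sequence issued directly with scripts/rpc.py, assuming the repository root as working directory and the default /var/tmp/spdk.sock RPC socket:

    # Sketch: manual equivalent of the tc2 target bring-up traced below.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    sleep 2   # crude stand-in for waitforlisten polling the RPC socket

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    ./scripts/rpc.py nvmf_create_transport -t tcp -o        # NVMF_TRANSPORT_OPTS from the trace
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420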
00:28:29.826 [2024-11-26 19:29:52.640959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:29.826 [2024-11-26 19:29:52.640965] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:29.826 [2024-11-26 19:29:52.640970] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:29.826 [2024-11-26 19:29:52.642478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:29.826 [2024-11-26 19:29:52.642702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:29.826 [2024-11-26 19:29:52.642584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:29.826 [2024-11-26 19:29:52.642703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.395 Malloc0 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.395 [2024-11-26 19:29:53.439861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.395 19:29:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.395 [2024-11-26 19:29:53.468914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3905809 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:30.395 19:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:32.954 19:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3905609 00:28:32.954 19:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error 
(sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 [2024-11-26 19:29:55.504209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read 
completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 [2024-11-26 19:29:55.504406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Read completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.954 starting I/O failed 00:28:32.954 Write completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 00:28:32.955 Read completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 00:28:32.955 Read completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 00:28:32.955 Read completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 00:28:32.955 Read completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 00:28:32.955 Read completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 00:28:32.955 Read completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 00:28:32.955 Write completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 
00:28:32.955 Read completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 00:28:32.955 Read completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 00:28:32.955 Read completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 00:28:32.955 Write completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 00:28:32.955 Write completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 00:28:32.955 Read completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 00:28:32.955 Write completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 00:28:32.955 Read completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 00:28:32.955 Read completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 00:28:32.955 Write completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 00:28:32.955 Read completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 00:28:32.955 Write completed with error (sct=0, sc=8) 00:28:32.955 starting I/O failed 00:28:32.955 [2024-11-26 19:29:55.504605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.955 [2024-11-26 19:29:55.504934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.504959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.505132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.505142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.505292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.505302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.505508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.505519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.505606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.505615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.505700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.505710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 
00:28:32.955 [2024-11-26 19:29:55.505814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.505834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.506000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.506018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.506170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.506181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.506361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.506393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.506583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.506615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.506850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.506883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.507072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.507104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.507291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.507323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.507611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.507643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.507847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.507879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 
00:28:32.955 [2024-11-26 19:29:55.508138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.508169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.508306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.508338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.508516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.508548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.508739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.508749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.508905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.508915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.509043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.509075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.509311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.509343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.509581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.509612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.509813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.955 [2024-11-26 19:29:55.509847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.955 qpair failed and we were unable to recover it. 00:28:32.955 [2024-11-26 19:29:55.510050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.510083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 
00:28:32.956 [2024-11-26 19:29:55.510342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.510353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.510501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.510511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.510654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.510664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.510747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.510757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.510883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.510894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.510981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.510990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.511058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.511068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.511227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.511238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.511461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.511471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.511612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.511622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 
00:28:32.956 [2024-11-26 19:29:55.511827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.511837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.512087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.512119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.512374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.512405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.512599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.512629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.512849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.512881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.513018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.513049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.513276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.513307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.513489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.513519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.513746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.513760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.513910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.513922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 
00:28:32.956 [2024-11-26 19:29:55.514080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.514096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.514203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.514216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.514308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.514321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.514541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.514555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.514754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.514767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.516599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.516635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.516893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.516925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.517113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.517144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.517388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.517419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 00:28:32.956 [2024-11-26 19:29:55.517691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.956 [2024-11-26 19:29:55.517724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.956 qpair failed and we were unable to recover it. 
00:28:32.963 [2024-11-26 19:29:55.562528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.562558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.562753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.562785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.562956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.562985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.563233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.563263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.563501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.563531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.563724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.563755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.564014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.564045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.564230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.564260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.564521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.564551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.564844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.564877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 
00:28:32.963 [2024-11-26 19:29:55.565149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.565181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.565431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.565462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.565720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.565752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.565991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.566021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.566215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.566246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.566451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.566481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.566700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.566733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.566995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.567028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.567213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.567244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.567437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.567467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 
00:28:32.963 [2024-11-26 19:29:55.567734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.567767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.568015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.568046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.568310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.568341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.568581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.568611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.568854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.568886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.569173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.569204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.569394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.569424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.569607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.569644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.569827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.569859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.570040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.570071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 
00:28:32.963 [2024-11-26 19:29:55.570338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.570368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.570571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.570602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.570865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.570897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.571137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.571167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.571433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.571464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.963 [2024-11-26 19:29:55.571663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.963 [2024-11-26 19:29:55.571703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.963 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.571930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.571961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.572159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.572190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.572376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.572406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.572668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.572709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 
00:28:32.964 [2024-11-26 19:29:55.572890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.572922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.573208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.573240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.573424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.573455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.573638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.573677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.573851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.573882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.574170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.574201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.574465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.574497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.574615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.574645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.574855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.574886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.575083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.575114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 
00:28:32.964 [2024-11-26 19:29:55.575366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.575398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.575693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.575726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.576032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.576063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.576179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.576210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.576401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.576432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.576640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.576678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.576938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.576969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.577149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.577180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.577444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.577475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.577715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.577747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 
00:28:32.964 [2024-11-26 19:29:55.577923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.577954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.578214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.578244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.578433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.578464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.578728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.578762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.578947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.578977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.579240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.579271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.579453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.579484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.579696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.579735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.579997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.964 [2024-11-26 19:29:55.580029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.964 qpair failed and we were unable to recover it. 00:28:32.964 [2024-11-26 19:29:55.580275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.580306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 
00:28:32.965 [2024-11-26 19:29:55.580512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.580543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.580795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.580827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.581106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.581137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.581419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.581450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.581638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.581677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.581850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.581881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.582126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.582157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.582391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.582422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.582679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.582711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.582920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.582951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 
00:28:32.965 [2024-11-26 19:29:55.583147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.583178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.583358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.583389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.583513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.583543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.583800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.583832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.584132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.584164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.584349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.584380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.584620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.584651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.584802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.584833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.585141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.585172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.585354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.585384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 
00:28:32.965 [2024-11-26 19:29:55.585619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.585651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.585862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.585893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.586075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.586106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.586318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.586349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.586617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.586648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.586918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.586950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.587193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.587224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.587417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.587449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.587627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.587657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.587932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.587963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 
00:28:32.965 [2024-11-26 19:29:55.588205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.588236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.588448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.588479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.588741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.588774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.589061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.589091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.589271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.589302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.589482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.589513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.965 qpair failed and we were unable to recover it. 00:28:32.965 [2024-11-26 19:29:55.589702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.965 [2024-11-26 19:29:55.589734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.589995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.590032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.590169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.590198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.590437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.590467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 
00:28:32.966 [2024-11-26 19:29:55.590666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.590715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.590976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.591007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.591144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.591175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.591293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.591325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.591583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.591615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.591882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.591913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.592202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.592233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.592509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.592540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.592825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.592857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.593032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.593063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 
00:28:32.966 [2024-11-26 19:29:55.593302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.593334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.593515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.593546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.593726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.593759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.594016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.594047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.594217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.594248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.594415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.594446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.594709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.594743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.594925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.594957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.595220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.595252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.595438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.595469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 
00:28:32.966 [2024-11-26 19:29:55.595761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.595795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.596001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.596032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.596245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.596277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.596464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.596495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.596687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.596720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.596906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.596937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.597146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.597177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.597362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.597393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.597655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.597697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 00:28:32.966 [2024-11-26 19:29:55.597803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.966 [2024-11-26 19:29:55.597835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.966 qpair failed and we were unable to recover it. 
00:28:32.966 [2024-11-26 19:29:55.598097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.966 [2024-11-26 19:29:55.598128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:32.966 qpair failed and we were unable to recover it.
[The same three-line error recurs back-to-back, with only the timestamps advancing, for every subsequent connection attempt through 2024-11-26 19:29:55.653184 (elapsed 00:28:32.966-00:28:32.972): each connect() to 10.0.0.2 port 4420 for tqpair=0x7f8314000b90 fails with errno = 111 and the qpair is not recovered.]
00:28:32.972 [2024-11-26 19:29:55.653440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.972 [2024-11-26 19:29:55.653471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.972 qpair failed and we were unable to recover it. 00:28:32.972 [2024-11-26 19:29:55.653682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.972 [2024-11-26 19:29:55.653714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.972 qpair failed and we were unable to recover it. 00:28:32.972 [2024-11-26 19:29:55.653978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.972 [2024-11-26 19:29:55.654009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.972 qpair failed and we were unable to recover it. 00:28:32.972 [2024-11-26 19:29:55.654214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.972 [2024-11-26 19:29:55.654246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.972 qpair failed and we were unable to recover it. 00:28:32.972 [2024-11-26 19:29:55.654507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.972 [2024-11-26 19:29:55.654538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.972 qpair failed and we were unable to recover it. 00:28:32.972 [2024-11-26 19:29:55.654734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.972 [2024-11-26 19:29:55.654768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.972 qpair failed and we were unable to recover it. 00:28:32.972 [2024-11-26 19:29:55.654992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.972 [2024-11-26 19:29:55.655024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.972 qpair failed and we were unable to recover it. 00:28:32.972 [2024-11-26 19:29:55.655276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.972 [2024-11-26 19:29:55.655307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.972 qpair failed and we were unable to recover it. 00:28:32.972 [2024-11-26 19:29:55.655575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.972 [2024-11-26 19:29:55.655607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.972 qpair failed and we were unable to recover it. 00:28:32.972 [2024-11-26 19:29:55.655864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.972 [2024-11-26 19:29:55.655897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.972 qpair failed and we were unable to recover it. 
00:28:32.972 [2024-11-26 19:29:55.656149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.972 [2024-11-26 19:29:55.656180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.972 qpair failed and we were unable to recover it. 00:28:32.972 [2024-11-26 19:29:55.656386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.972 [2024-11-26 19:29:55.656418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.972 qpair failed and we were unable to recover it. 00:28:32.972 [2024-11-26 19:29:55.656695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.972 [2024-11-26 19:29:55.656728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.972 qpair failed and we were unable to recover it. 00:28:32.972 [2024-11-26 19:29:55.656923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.972 [2024-11-26 19:29:55.656955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.972 qpair failed and we were unable to recover it. 00:28:32.972 [2024-11-26 19:29:55.657175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.972 [2024-11-26 19:29:55.657207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.972 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.657476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.657508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.657811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.657845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.658128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.658159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.658352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.658383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.658565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.658597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 
00:28:32.973 [2024-11-26 19:29:55.658794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.658828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.659106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.659137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.659340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.659372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.659497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.659528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.659741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.659781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.659908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.659940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.660257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.660289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.660570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.660602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.660797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.660830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.661087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.661118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 
00:28:32.973 [2024-11-26 19:29:55.661249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.661281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.661563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.661595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.661791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.661823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.662082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.662113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.662307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.662340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.662593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.662624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.662813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.662846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.663099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.663130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.663432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.663464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.663693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.663726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 
00:28:32.973 [2024-11-26 19:29:55.663948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.663981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.664186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.664218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.664489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.664521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.664805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.664840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.665041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.665072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.665211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.665243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.665494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.665525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.665775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.665809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.973 qpair failed and we were unable to recover it. 00:28:32.973 [2024-11-26 19:29:55.666063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.973 [2024-11-26 19:29:55.666095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.666286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.666318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 
00:28:32.974 [2024-11-26 19:29:55.666597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.666629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.666940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.666973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.667231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.667262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.667564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.667596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.667865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.667899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.668150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.668181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.668428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.668460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.668730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.668765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.668973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.669004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.669308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.669341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 
00:28:32.974 [2024-11-26 19:29:55.669607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.669638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.669925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.669958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.670175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.670206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.670339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.670371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.670590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.670627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.670917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.670950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.671170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.671201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.671419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.671451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.671722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.671755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.671886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.671918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 
00:28:32.974 [2024-11-26 19:29:55.672186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.672219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.672345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.672376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.672578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.672609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.672743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.672775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.672988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.673020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.673208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.673239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.673447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.673479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.673728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.673762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.674071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.674102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.674365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.674397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 
00:28:32.974 [2024-11-26 19:29:55.674705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.674738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.674999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.675032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.675318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.675350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.675631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.675662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.974 [2024-11-26 19:29:55.675945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.974 [2024-11-26 19:29:55.675977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.974 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.676266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.676298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.676551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.676583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.676720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.676754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.677032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.677064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.677260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.677292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 
00:28:32.975 [2024-11-26 19:29:55.677544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.677575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.677775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.677809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.678089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.678120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.678398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.678430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.678620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.678651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.678901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.678932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.679228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.679260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.679452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.679484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.679613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.679644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.679931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.679963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 
00:28:32.975 [2024-11-26 19:29:55.680214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.680245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.680449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.680481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.680758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.680792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.681078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.681109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.681374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.681417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.681613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.681644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.681850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.681883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.682156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.682187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.682387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.682419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.682556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.682587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 
00:28:32.975 [2024-11-26 19:29:55.682837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.682871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.682999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.683031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.683220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.683251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.683525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.683557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.683742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.683776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.683886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.683916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.684135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.684166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.684349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.684380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.684567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.684599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.684792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.684825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 
00:28:32.975 [2024-11-26 19:29:55.685103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.685135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.685384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.685416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.975 qpair failed and we were unable to recover it. 00:28:32.975 [2024-11-26 19:29:55.685617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.975 [2024-11-26 19:29:55.685648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.685913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.685946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.686245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.686276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.686489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.686520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.686700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.686734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.687010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.687042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.687240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.687271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.687543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.687574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 
00:28:32.976 [2024-11-26 19:29:55.687875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.687908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.688172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.688204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.688509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.688541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.688687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.688720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.689019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.689051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.689259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.689290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.689566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.689598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.689881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.689914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.690137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.690168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.690379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.690411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 
00:28:32.976 [2024-11-26 19:29:55.690592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.690624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.690828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.690862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.690988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.691020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.691257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.691288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.691587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.691624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.691905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.691939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.692149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.692180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.692459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.692490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.692629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.692661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.692927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.692958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 
00:28:32.976 [2024-11-26 19:29:55.693239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.693271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.693398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.693430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.693703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.693736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.693921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.693952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.694098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.694129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.694334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.694366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.694646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.694703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.694969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.695001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.695279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.695311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 00:28:32.976 [2024-11-26 19:29:55.695455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.976 [2024-11-26 19:29:55.695487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.976 qpair failed and we were unable to recover it. 
00:28:32.976 [2024-11-26 19:29:55.695787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.695821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.696105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.696136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.696368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.696399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.696652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.696694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.696918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.696949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.697223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.697255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.697440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.697471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.697742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.697775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.698072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.698103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.698373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.698404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 
00:28:32.977 [2024-11-26 19:29:55.698533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.698564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.698797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.698830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.699104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.699136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.699429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.699461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.699758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.699792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.700010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.700042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.700294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.700326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.700515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.700546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.700730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.700763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.701036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.701068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 
00:28:32.977 [2024-11-26 19:29:55.701266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.701297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.701489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.701521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.701701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.701735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.701946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.701977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.702253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.702291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.702476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.702508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.702776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.702809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.703085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.703117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.703409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.703440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.703724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.703757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 
00:28:32.977 [2024-11-26 19:29:55.704002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.704033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.704292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.704323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.704586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.704617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.704822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.704854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.705132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.705163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.705370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.705401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.705685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.705719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.977 qpair failed and we were unable to recover it. 00:28:32.977 [2024-11-26 19:29:55.705999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.977 [2024-11-26 19:29:55.706030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.706310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.706342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.706598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.706631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 
00:28:32.978 [2024-11-26 19:29:55.706861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.706894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.707004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.707035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.707254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.707285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.707552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.707583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.707826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.707861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.708137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.708169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.708449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.708480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.708747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.708781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.708964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.708996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.709187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.709218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 
00:28:32.978 [2024-11-26 19:29:55.709489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.709520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.709818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.709852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.710118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.710150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.710445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.710477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.710662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.710712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.710898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.710929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.711204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.711236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.711511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.711542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.711693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.711726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.712004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.712037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 
00:28:32.978 [2024-11-26 19:29:55.712285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.712316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.712451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.712482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.712684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.712717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.712916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.712948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.713133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.713170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.713425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.978 [2024-11-26 19:29:55.713458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.978 qpair failed and we were unable to recover it. 00:28:32.978 [2024-11-26 19:29:55.713754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.713787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.713992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.714023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.714202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.714234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.714510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.714541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 
00:28:32.979 [2024-11-26 19:29:55.714765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.714799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.714917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.714949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.715167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.715198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.715472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.715503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.715803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.715836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.716106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.716138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.716398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.716429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.716686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.716718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.716939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.716971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.717150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.717181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 
00:28:32.979 [2024-11-26 19:29:55.717408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.717439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.717723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.717757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.717898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.717930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.718230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.718261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.718532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.718563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.718769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.718803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.718990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.719021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.719295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.719327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.719606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.719637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.719947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.719981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 
00:28:32.979 [2024-11-26 19:29:55.720183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.720214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.720475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.720507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.720701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.720734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.720935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.720967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.721167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.721198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.979 [2024-11-26 19:29:55.721321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.979 [2024-11-26 19:29:55.721352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.979 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.721617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.721649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.721862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.721894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.722077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.722108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.722312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.722343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 
00:28:32.980 [2024-11-26 19:29:55.722619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.722651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.722982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.723013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.723306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.723337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.723615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.723647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.723854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.723892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.724166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.724197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.724471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.724502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.724798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.724833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.725105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.725137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.725425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.725456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 
00:28:32.980 [2024-11-26 19:29:55.725738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.725772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.726015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.726047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.726258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.726290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.726503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.726535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.726761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.726795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.727070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.727101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.727388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.727419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.727701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.727734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.727983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.728016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.728220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.728252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 
00:28:32.980 [2024-11-26 19:29:55.728501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.728532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.728715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.728748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.729049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.729081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.729278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.729311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.729535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.729566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.729818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.729852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.730080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.730111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.730361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.730393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.730643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.730684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.730906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.730937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 
00:28:32.980 [2024-11-26 19:29:55.731214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.731246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.731535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.731568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.980 [2024-11-26 19:29:55.731750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.980 [2024-11-26 19:29:55.731782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.980 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.731990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.732021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.732220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.732252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.732523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.732555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.732750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.732784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.733031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.733062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.733248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.733280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.733578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.733609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 
00:28:32.981 [2024-11-26 19:29:55.733879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.733912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.734110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.734141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.734336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.734368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.734572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.734603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.734747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.734791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.735070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.735102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.735349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.735381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.735651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.735693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.735893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.735925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.736119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.736150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 
00:28:32.981 [2024-11-26 19:29:55.736402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.736433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.736708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.736742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.736941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.736972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.737222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.737254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.737526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.737558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.737772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.737806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.738057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.738088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.738271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.738301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.738510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.738543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.738798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.738831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 
00:28:32.981 [2024-11-26 19:29:55.739124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.739155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.739353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.739384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.739574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.739607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.739880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.739912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.740113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.740145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.740401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.740432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.740543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.740574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.740761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.740795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.741003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.741035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 00:28:32.981 [2024-11-26 19:29:55.741364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-11-26 19:29:55.741394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.981 qpair failed and we were unable to recover it. 
00:28:32.981 [2024-11-26 19:29:55.741666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.741716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.741995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.742027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.742298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.742329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.742625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.742656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.742934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.742966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.743242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.743273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.743561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.743593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.743788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.743821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.744025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.744056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.744306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.744337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 
00:28:32.982 [2024-11-26 19:29:55.744522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.744553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.744822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.744854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.745068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.745099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.745348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.745380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.745691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.745728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.746009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.746041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.746182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.746214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.746507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.746539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.746839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.746873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.747138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.747170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 
00:28:32.982 [2024-11-26 19:29:55.747363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.747395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.747668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.747709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.747905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.747936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.748136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.748167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.748437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.748469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.748686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.748718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.748996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.749028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.749231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.749262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.749519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.749551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.749823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.749857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 
00:28:32.982 [2024-11-26 19:29:55.750046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.750077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.750329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.750361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.750611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.750643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.750843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.750875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.751136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.751167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.751384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.751416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.751663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-11-26 19:29:55.751703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.982 qpair failed and we were unable to recover it. 00:28:32.982 [2024-11-26 19:29:55.751895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.751928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.752208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.752240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.752455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.752487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 
00:28:32.983 [2024-11-26 19:29:55.752700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.752733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.752941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.752972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.753168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.753200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.753385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.753417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.753610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.753642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.753912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.753945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.754143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.754174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.754437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.754469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.754693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.754726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.755002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.755035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 
00:28:32.983 [2024-11-26 19:29:55.755234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.755265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.755521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.755552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.755688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.755721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.755974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.756007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.756308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.756345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.756632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.756665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.756972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.757003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.757215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.757247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.757499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.757530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.757723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.757756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 
00:28:32.983 [2024-11-26 19:29:55.757961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.757993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.758171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.758204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.758400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.758433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.758721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.758755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.758972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.759003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.759252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.759285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.759427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.759459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.759752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.759785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.759975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.760006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 00:28:32.983 [2024-11-26 19:29:55.760270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-11-26 19:29:55.760302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.983 qpair failed and we were unable to recover it. 
00:28:32.984 [2024-11-26 19:29:55.760502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.760533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.760680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.760714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.760920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.760951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.761138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.761169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.761367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.761399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.761576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.761608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.761798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.761832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.762109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.762141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.762351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.762382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.762574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.762605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 
00:28:32.984 [2024-11-26 19:29:55.762871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.762904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.763114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c57b20 is same with the state(6) to be set 00:28:32.984 [2024-11-26 19:29:55.763501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.763550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.763789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.763826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.764088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.764121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.764379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.764411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.764553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.764585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.764810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.764845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.764984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.765016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.765220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.765252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 
00:28:32.984 [2024-11-26 19:29:55.765523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.765555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.765835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.765872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.766153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.766186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.766311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.766343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.766614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.766647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.766869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.766902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.767184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.767216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.767505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.767538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.767737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.767771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.767958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.767990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 
00:28:32.984 [2024-11-26 19:29:55.768186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.768218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.768476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.768509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.768648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.768687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.768974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.769006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.769205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.769239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.769416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.769448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.769644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.769685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.769961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.984 [2024-11-26 19:29:55.769994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.984 qpair failed and we were unable to recover it. 00:28:32.984 [2024-11-26 19:29:55.770138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.770170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.770369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.770402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 
00:28:32.985 [2024-11-26 19:29:55.770605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.770638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.770783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.770817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.771140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.771173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.771440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.771473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.771760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.771794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.771997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.772029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.772321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.772354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.772596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.772628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.772764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.772797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.772979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.773011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 
00:28:32.985 [2024-11-26 19:29:55.773143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.773175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.773437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.773469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.773790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.773830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.774033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.774066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.774256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.774289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.774490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.774522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.774802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.774837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.775105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.775137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.775331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.775363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.775640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.775683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 
00:28:32.985 [2024-11-26 19:29:55.775896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.775927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.776121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.776154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.776355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.776388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.776515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.776547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.776819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.776853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.777128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.777160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.777458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.777490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.777715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.777748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.777972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.778005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.778240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.778272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 
00:28:32.985 [2024-11-26 19:29:55.778552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.778586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.778723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.778757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.779058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.779090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.779366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.779399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.779580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.779613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.779887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.985 [2024-11-26 19:29:55.779919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.985 qpair failed and we were unable to recover it. 00:28:32.985 [2024-11-26 19:29:55.780199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.780231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.780522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.780555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.780752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.780785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.781047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.781085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 
00:28:32.986 [2024-11-26 19:29:55.781380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.781413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.781614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.781646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.781858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.781891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.782082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.782115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.782369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.782401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.782690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.782723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.783027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.783059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.783187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.783219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.783351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.783384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.783529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.783561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 
00:28:32.986 [2024-11-26 19:29:55.783758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.783791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.783991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.784024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.784300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.784333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.784520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.784553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.784755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.784789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.785063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.785095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.785287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.785319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.785520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.785553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.785772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.785812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.786016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.786061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 
00:28:32.986 [2024-11-26 19:29:55.786281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.786315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.786507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.786539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.786800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.786833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.787025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.787060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.787261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.787295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.787547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.787579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.787829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.787862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.788122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.788155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.788374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.788406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.788689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.788723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 
00:28:32.986 [2024-11-26 19:29:55.789006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.789040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.789219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.789250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.789526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.986 [2024-11-26 19:29:55.789558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.986 qpair failed and we were unable to recover it. 00:28:32.986 [2024-11-26 19:29:55.789754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.789788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.790013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.790045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.790243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.790275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.790533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.790565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.790813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.790846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.791038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.791070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.791190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.791221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 
00:28:32.987 [2024-11-26 19:29:55.791471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.791548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.791881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.791960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.792267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.792306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.792624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.792658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.792895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.792928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.793154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.793188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.793440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.793472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.793664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.793707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.793990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.794022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.794318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.794350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 
00:28:32.987 [2024-11-26 19:29:55.794643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.794687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.794871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.794904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.795153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.795184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.795406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.795454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.795723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.795758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.796036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.796068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.796355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.796387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.796523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.796557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.796749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.796782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.796987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.797020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 
00:28:32.987 [2024-11-26 19:29:55.797270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.797301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.797579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.797613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.797813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.797846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.798099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.798130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.798407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.798441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.798707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.798740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.799060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.799091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.799370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.799402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.799693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.799728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.987 qpair failed and we were unable to recover it. 00:28:32.987 [2024-11-26 19:29:55.800002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.987 [2024-11-26 19:29:55.800034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 
00:28:32.988 [2024-11-26 19:29:55.800238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.800269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.800575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.800608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.800806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.800839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.801019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.801052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.801302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.801336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.801637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.801682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.801955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.801988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.802122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.802153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.802372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.802403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.802681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.802714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 
00:28:32.988 [2024-11-26 19:29:55.803068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.803145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.803367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.803402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.803617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.803653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.803878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.803913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.804185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.804220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.804348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.804381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.804606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.804638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.804856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.804889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.805167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.805198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.805389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.805421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 
00:28:32.988 [2024-11-26 19:29:55.805690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.805724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.806011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.806043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.806317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.806350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.806615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.806657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.806953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.806985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.988 [2024-11-26 19:29:55.807130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.988 [2024-11-26 19:29:55.807161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.988 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.807360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.807393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.807582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.807614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.807873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.807906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.808111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.808144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 
00:28:32.989 [2024-11-26 19:29:55.808339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.808370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.808551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.808585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.808861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.808895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.809091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.809123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.809321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.809352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.809546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.809579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.809835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.809868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.810081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.810114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.810389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.810423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.810610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.810643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 
00:28:32.989 [2024-11-26 19:29:55.810934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.810967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.811221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.811255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.811515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.811546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.811849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.811882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.812122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.812154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.812361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.812393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.812697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.812732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.812985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.813017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.813301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.813333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.813614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.813647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 
00:28:32.989 [2024-11-26 19:29:55.813982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.814014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.814270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.814302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.814484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.814518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.814822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.814854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.815036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.815068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.815343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.815376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.815629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.815661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.815952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.815986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.816265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.816298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.816415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.816446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 
00:28:32.989 [2024-11-26 19:29:55.816642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.816688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.816880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.989 [2024-11-26 19:29:55.816912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.989 qpair failed and we were unable to recover it. 00:28:32.989 [2024-11-26 19:29:55.817107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.817138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.817292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.817330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.817608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.817640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.817862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.817894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.818037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.818069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.818286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.818317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.818455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.818486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.818745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.818777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 
00:28:32.990 [2024-11-26 19:29:55.818974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.819005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.819185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.819217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.819419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.819452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.819657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.819712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.819999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.820032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.820251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.820282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.820457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.820489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.820627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.820660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.820903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.820935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.821053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.821084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 
00:28:32.990 [2024-11-26 19:29:55.821375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.821407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.821605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.821637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.821828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.821863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.822051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.822085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.822358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.822391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.822517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.822549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.822691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.822726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.822932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.822967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.823111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.823143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 00:28:32.990 [2024-11-26 19:29:55.823325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.990 [2024-11-26 19:29:55.823357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.990 qpair failed and we were unable to recover it. 
00:28:32.990 [2024-11-26 19:29:55.823538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.990 [2024-11-26 19:29:55.823576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:32.990 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for each connection retry timestamped 2024-11-26 19:29:55.823791 through 19:29:55.876650 ...]
00:28:32.996 [2024-11-26 19:29:55.876796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.996 [2024-11-26 19:29:55.876828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:32.996 qpair failed and we were unable to recover it.
00:28:32.996 [2024-11-26 19:29:55.877103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.877135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.877332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.877366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.877623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.877654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.877951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.877984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.878258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.878292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.878554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.878586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.878880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.878914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.879194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.879227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.879434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.879467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.879676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.879709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 
00:28:32.996 [2024-11-26 19:29:55.880010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.880042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.880268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.880300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.880562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.880593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.880848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.880882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.881184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.881216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.881481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.881513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.881725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.881760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.882078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.882110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.882232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.882263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.882483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.882514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 
00:28:32.996 [2024-11-26 19:29:55.882787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.882822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.883033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.883064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.883267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.883299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.883593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.883625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.883763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.996 [2024-11-26 19:29:55.883797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.996 qpair failed and we were unable to recover it. 00:28:32.996 [2024-11-26 19:29:55.883976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.884007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.884281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.884313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.884456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.884487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.884678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.884710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.884985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.885019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 
00:28:32.997 [2024-11-26 19:29:55.885239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.885278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.885501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.885533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.885787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.885820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.886006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.886038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.886341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.886372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.886564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.886596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.886873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.886908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.887218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.887250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.887505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.887538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.887794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.887827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 
00:28:32.997 [2024-11-26 19:29:55.887967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.888000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.888275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.888308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.888557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.888589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.888857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.888890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.889098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.889132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.889409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.889443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.889644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.889687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.889918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.889950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.890077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.890108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.890382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.890415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 
00:28:32.997 [2024-11-26 19:29:55.890712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.890746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.890929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.890962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.891179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.891211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.891488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.891523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.891646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.891688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.891874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.891907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.892080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.892111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.892285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.892365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.892648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.892698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.892891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.892925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 
00:28:32.997 [2024-11-26 19:29:55.893215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.893248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.893395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.893427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.893689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.997 [2024-11-26 19:29:55.893722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.997 qpair failed and we were unable to recover it. 00:28:32.997 [2024-11-26 19:29:55.893906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.893937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.894151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.894185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.894461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.894493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.894637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.894679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.894959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.894991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.895287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.895321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.895589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.895621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 
00:28:32.998 [2024-11-26 19:29:55.895903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.895938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.896220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.896252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.896532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.896564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.896856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.896892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.897094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.897128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.897309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.897340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.897557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.897589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.897844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.897878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.898129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.898162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.898353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.898385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 
00:28:32.998 [2024-11-26 19:29:55.898633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.898665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.898949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.898981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.899107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.899139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.899334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.899365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.899569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.899607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.899819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.899852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.900047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.900078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.900275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.900307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.900600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.900633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.900900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.900933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 
00:28:32.998 [2024-11-26 19:29:55.901133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.901164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.901362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.901395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.901647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.901687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.901879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.901910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.902201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.902233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.902415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.902447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.902584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.902615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.902740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.902778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.903028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.903060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.903251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.903283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 
00:28:32.998 [2024-11-26 19:29:55.903464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.903495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.903632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.903663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.903950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.903982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.904185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.904216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.904417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.904449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.904746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.904781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.904918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.904950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.998 [2024-11-26 19:29:55.905151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.998 [2024-11-26 19:29:55.905185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.998 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.905397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.905428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.905623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.905655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 
00:28:32.999 [2024-11-26 19:29:55.905891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.905922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.906128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.906160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.906415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.906447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.906634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.906664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.906873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.906907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.907106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.907137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.907387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.907419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.907608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.907639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.907902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.907934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.908161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.908193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 
00:28:32.999 [2024-11-26 19:29:55.908336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.908368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.908667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.908712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.908966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.908998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.909226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.909259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.909462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.909494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.909683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.909719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.909914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.909946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.910164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.910196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.910470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.910502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 00:28:32.999 [2024-11-26 19:29:55.910790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.999 [2024-11-26 19:29:55.910824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:32.999 qpair failed and we were unable to recover it. 
00:28:32.999 [2024-11-26 19:29:55.911063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.999 [2024-11-26 19:29:55.911094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:32.999 qpair failed and we were unable to recover it.
00:28:32.999 [... the same error sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats continuously from 19:29:55.911063 through 19:29:55.967969 ...]
00:28:33.005 [2024-11-26 19:29:55.967938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.005 [2024-11-26 19:29:55.967969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:33.005 qpair failed and we were unable to recover it.
00:28:33.005 [2024-11-26 19:29:55.968170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.005 [2024-11-26 19:29:55.968201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.005 qpair failed and we were unable to recover it. 00:28:33.005 [2024-11-26 19:29:55.968457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.005 [2024-11-26 19:29:55.968488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.005 qpair failed and we were unable to recover it. 00:28:33.005 [2024-11-26 19:29:55.968786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.005 [2024-11-26 19:29:55.968819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.005 qpair failed and we were unable to recover it. 00:28:33.005 [2024-11-26 19:29:55.969028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.005 [2024-11-26 19:29:55.969060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.005 qpair failed and we were unable to recover it. 00:28:33.005 [2024-11-26 19:29:55.969310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.005 [2024-11-26 19:29:55.969342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.005 qpair failed and we were unable to recover it. 00:28:33.005 [2024-11-26 19:29:55.969641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.005 [2024-11-26 19:29:55.969684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.005 qpair failed and we were unable to recover it. 00:28:33.005 [2024-11-26 19:29:55.969964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.005 [2024-11-26 19:29:55.969996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.005 qpair failed and we were unable to recover it. 00:28:33.005 [2024-11-26 19:29:55.970189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.005 [2024-11-26 19:29:55.970220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.005 qpair failed and we were unable to recover it. 00:28:33.005 [2024-11-26 19:29:55.970417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.970449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.970721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.970754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 
00:28:33.006 [2024-11-26 19:29:55.971040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.971071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.971280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.971312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.971501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.971533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.971810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.971843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.972064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.972095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.972394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.972425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.972630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.972662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.972950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.972982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.973191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.973223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.973500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.973531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 
00:28:33.006 [2024-11-26 19:29:55.973786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.973818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.974000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.974032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.974330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.974362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.974581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.974612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.974904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.974943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.975150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.975182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.975335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.975366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.975588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.975619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.975902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.975935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.976217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.976248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 
00:28:33.006 [2024-11-26 19:29:55.976438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.976469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.976651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.976694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.976885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.976915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.977111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.977142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.977416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.977448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.977571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.977602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.977879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.977912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.978184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.978216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.978485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.978517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.978811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.978845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 
00:28:33.006 [2024-11-26 19:29:55.979119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.979151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.006 [2024-11-26 19:29:55.979441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.006 [2024-11-26 19:29:55.979473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.006 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.979750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.979783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.980045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.980077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.980355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.980387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.980660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.980702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.980991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.981023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.981292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.981325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.981461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.981492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.981694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.981727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 
00:28:33.007 [2024-11-26 19:29:55.981927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.981959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.982217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.982248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.982448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.982480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.982701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.982734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.983039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.983070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.983281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.983313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.983505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.983538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.983791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.983824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.984021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.984052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.984271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.984303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 
00:28:33.007 [2024-11-26 19:29:55.984553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.984585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.984868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.984902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.985155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.985186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.985380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.985411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.985693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.985733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.986033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.986065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.986324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.986356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.986618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.986651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.986881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.986913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.987111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.987143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 
00:28:33.007 [2024-11-26 19:29:55.987397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.987429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.987633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.987665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.987942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.987974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.988225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.988257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.988472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.988503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.988763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.988798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.989018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.007 [2024-11-26 19:29:55.989049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.007 qpair failed and we were unable to recover it. 00:28:33.007 [2024-11-26 19:29:55.989253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.989285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.989489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.989521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.989641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.989683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 
00:28:33.008 [2024-11-26 19:29:55.989884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.989916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.990193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.990225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.990431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.990463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.990766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.990799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.991082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.991114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.991363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.991395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.991692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.991725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.992000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.992031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.992309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.992342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.992633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.992664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 
00:28:33.008 [2024-11-26 19:29:55.992888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.992921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.993181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.993213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.993413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.993445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.993624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.993656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.993951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.993983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.994263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.994294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.994578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.994610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.994920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.994953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.995229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.995261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.995484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.995515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 
00:28:33.008 [2024-11-26 19:29:55.995780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.995814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.996109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.996140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.996415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.996447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.996657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.996699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.996950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.996988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.997258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.997290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.997499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.997531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.997784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.997817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.997999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.998030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.998298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.998330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 
00:28:33.008 [2024-11-26 19:29:55.998583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.998615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.998884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.998917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.999197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.999228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.999424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.999456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.008 [2024-11-26 19:29:55.999659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.008 [2024-11-26 19:29:55.999703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.008 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:55.999845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:55.999877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.000165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.000196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.000420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.000452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.000653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.000705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.000899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.000932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 
00:28:33.009 [2024-11-26 19:29:56.001188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.001220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.001359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.001390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.001576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.001607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.001815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.001847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.002130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.002162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.002343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.002375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.002511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.002543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.002820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.002853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.003078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.003110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.003306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.003339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 
00:28:33.009 [2024-11-26 19:29:56.003556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.003587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.003897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.003931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.004125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.004156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.004404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.004436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.004640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.004688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.004948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.004980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.005183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.005215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.005485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.005517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.005711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.005744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.005871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.005904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 
00:28:33.009 [2024-11-26 19:29:56.006083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.006117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.006313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.006346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.006527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.006559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.006754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.006789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.006987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.007025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.007234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.007269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.007543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.007576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.007827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.007860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.008125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.008159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.008367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.008401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 
00:28:33.009 [2024-11-26 19:29:56.008706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.008738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.008999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.009031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.009 [2024-11-26 19:29:56.009312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.009 [2024-11-26 19:29:56.009345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.009 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.009658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.009704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.009920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.009952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.010234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.010266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.010479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.010512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.010794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.010830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.011131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.011163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.011369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.011403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 
00:28:33.010 [2024-11-26 19:29:56.011612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.011643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.011932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.011963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.012247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.012278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.012587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.012620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.012929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.012964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.013153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.013187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.013449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.013481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.013706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.013740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.013995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.014029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.014210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.014242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 
00:28:33.010 [2024-11-26 19:29:56.014497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.014528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.014863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.014943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.015178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.015215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.015433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.015465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.015682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.015716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.015910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.015941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.016192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.016224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.016516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.016548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.016767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.016799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.017074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.017108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 
00:28:33.010 [2024-11-26 19:29:56.017379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.017411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.017627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.017660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.017935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.017969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.018255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.018290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.018569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.018613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.018918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.018951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.019205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.019238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.019471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.019503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.019699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.019731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.019891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.019922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 
00:28:33.010 [2024-11-26 19:29:56.020208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.010 [2024-11-26 19:29:56.020241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.010 qpair failed and we were unable to recover it. 00:28:33.010 [2024-11-26 19:29:56.020518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.020550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.020791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.020823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.021096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.021128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.021309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.021340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.021626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.021656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.021803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.021837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.022023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.022054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.022340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.022373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.022521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.022552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 
00:28:33.011 [2024-11-26 19:29:56.022750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.022783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.023105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.023137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.023393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.023426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.023623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.023655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.023867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.023899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.024059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.024090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.024277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.024309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.024527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.024559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.024778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.024812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.025064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.025098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 
00:28:33.011 [2024-11-26 19:29:56.025400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.025433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.025661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.025705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.025861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.025896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.026030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.026065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.026266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.026298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.026553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.026587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.026851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.026885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.027029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.027060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.027253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.027285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.027484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.027514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 
00:28:33.011 [2024-11-26 19:29:56.027663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.027705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.027827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.027858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.028085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.028118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.011 [2024-11-26 19:29:56.028273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.011 [2024-11-26 19:29:56.028306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.011 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.028620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.028658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.028866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.028903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.029052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.029086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.029392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.029424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.029615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.029646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.029838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.029872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 
00:28:33.012 [2024-11-26 19:29:56.030067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.030098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.030275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.030306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.030558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.030593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.030793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.030825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.031023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.031055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.031202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.031234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.031427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.031458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.031694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.031728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.031873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.031906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.032101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.032133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 
00:28:33.012 [2024-11-26 19:29:56.032321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.032352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.032604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.032638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.032856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.032888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.033035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.033066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.033291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.033326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.033626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.033659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.033876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.033909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.034185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.034217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.034338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.034371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.034498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.034529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 
00:28:33.012 [2024-11-26 19:29:56.034688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.034721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.034945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.035023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.035252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.035289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.035435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.035469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.035747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.035786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.035994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.036026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.036228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.036260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.036523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.036557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.036746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.036783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.036935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.036968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 
00:28:33.012 [2024-11-26 19:29:56.037154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.037187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.037429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.012 [2024-11-26 19:29:56.037461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.012 qpair failed and we were unable to recover it. 00:28:33.012 [2024-11-26 19:29:56.037767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.013 [2024-11-26 19:29:56.037805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.013 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.039422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.039479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.039775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.039813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.039981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.040015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.040270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.040302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.040554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.040589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.040824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.040860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.041059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.041095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 
00:28:33.296 [2024-11-26 19:29:56.041297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.041332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.041486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.041520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.041718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.041753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.041881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.041913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.042041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.042074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.042228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.042260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.042484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.042517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.042662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.042706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.042892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.042931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.043063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.043096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 
00:28:33.296 [2024-11-26 19:29:56.043258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.043291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.043429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.043461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.043643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.043709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.043845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.043879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.044082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.044115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.044302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.044333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.044531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.044563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.044696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.044729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.044886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-11-26 19:29:56.044920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-11-26 19:29:56.045111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.297 [2024-11-26 19:29:56.045143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.297 qpair failed and we were unable to recover it. 
00:28:33.297 [2024-11-26 19:29:56.045354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.297 [2024-11-26 19:29:56.045385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.297 qpair failed and we were unable to recover it. 00:28:33.297 [2024-11-26 19:29:56.045587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.297 [2024-11-26 19:29:56.045622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.297 qpair failed and we were unable to recover it. 00:28:33.297 [2024-11-26 19:29:56.045830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.297 [2024-11-26 19:29:56.045866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.297 qpair failed and we were unable to recover it. 00:28:33.297 [2024-11-26 19:29:56.046005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.297 [2024-11-26 19:29:56.046036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.297 qpair failed and we were unable to recover it. 00:28:33.297 [2024-11-26 19:29:56.046178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.297 [2024-11-26 19:29:56.046210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.297 qpair failed and we were unable to recover it. 00:28:33.297 [2024-11-26 19:29:56.046418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.297 [2024-11-26 19:29:56.046451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.297 qpair failed and we were unable to recover it. 00:28:33.297 [2024-11-26 19:29:56.046558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.297 [2024-11-26 19:29:56.046590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.297 qpair failed and we were unable to recover it. 00:28:33.297 [2024-11-26 19:29:56.046724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.297 [2024-11-26 19:29:56.046758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.297 qpair failed and we were unable to recover it. 00:28:33.297 [2024-11-26 19:29:56.046873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.297 [2024-11-26 19:29:56.046904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.297 qpair failed and we were unable to recover it. 00:28:33.297 [2024-11-26 19:29:56.047032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.297 [2024-11-26 19:29:56.047064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.297 qpair failed and we were unable to recover it. 
00:28:33.297 [2024-11-26 19:29:56.047187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-11-26 19:29:56.047219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
[The same three-line failure sequence repeats for every connection attempt logged between 2024-11-26 19:29:56.047 and 19:29:56.098: posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
00:28:33.302 [2024-11-26 19:29:56.098236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.302 [2024-11-26 19:29:56.098267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420
00:28:33.302 qpair failed and we were unable to recover it.
00:28:33.302 [2024-11-26 19:29:56.098534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-11-26 19:29:56.098566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-11-26 19:29:56.098864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-11-26 19:29:56.098896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-11-26 19:29:56.099050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-11-26 19:29:56.099082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-11-26 19:29:56.099212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-11-26 19:29:56.099243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-11-26 19:29:56.099531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-11-26 19:29:56.099563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-11-26 19:29:56.099748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-11-26 19:29:56.099780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.099974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.100006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.100259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.100291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.100590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.100622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.100819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.100852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 
00:28:33.303 [2024-11-26 19:29:56.100990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.101022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.101225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.101255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.101385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.101428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.101705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.101738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.101960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.101992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.102185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.102217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.102464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.102496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.102796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.102829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.102959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.102990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.103285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.103317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 
00:28:33.303 [2024-11-26 19:29:56.103583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.103614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.103851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.103884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.104073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.104104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.104355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.104387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.104641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.104681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.104874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.104905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.105110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.105142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.105335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.105367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.105551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.105583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.105841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.105876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 
00:28:33.303 [2024-11-26 19:29:56.106055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.106087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.106286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.106318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.106605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.106637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.106849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.106883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.107073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.107105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.107378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.107411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.107602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.107633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.107839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.107873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.108069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.108100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.108361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.108393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 
00:28:33.303 [2024-11-26 19:29:56.108647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.108700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.108954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.108986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.109189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.109220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.109518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-11-26 19:29:56.109550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-11-26 19:29:56.109773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.109807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.110022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.110054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.110183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.110215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.110488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.110520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.110732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.110765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.110974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.111006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 
00:28:33.304 [2024-11-26 19:29:56.111185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.111216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.111499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.111530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.111726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.111760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.111970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.112002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.112201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.112234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.112432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.112462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.112657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.112716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.112901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.112932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.113081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.113111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.113236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.113269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 
00:28:33.304 [2024-11-26 19:29:56.113484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.113517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.113706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.113740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.113944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.113975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.114117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.114148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.114408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.114440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.114630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.114662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.114923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.114956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.115155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.115186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.115493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.115525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.115810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.115843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 
00:28:33.304 [2024-11-26 19:29:56.115980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.116011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.116195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.116226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.116496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.116528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.116803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.116836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.116958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.116989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.117195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.117227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.117444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.117477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.117778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.117812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.118097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.118128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.118261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.118292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 
00:28:33.304 [2024-11-26 19:29:56.118542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.118580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.118833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.118866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-11-26 19:29:56.119131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-11-26 19:29:56.119163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.119400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.119433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.119623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.119655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.119862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.119894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.120037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.120069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.120361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.120392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.120646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.120703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.120839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.120870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 
00:28:33.305 [2024-11-26 19:29:56.121005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.121036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.121183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.121214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.121498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.121529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.121805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.121839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.122068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.122100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.122345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.122377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.122589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.122621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.122749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.122781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.123032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.123064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.123271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.123302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 
00:28:33.305 [2024-11-26 19:29:56.123577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.123610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.123763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.123796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.123955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.123985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.124244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.124276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.124539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.124570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.124852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.124887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.125142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.125174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.125495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.125533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.125815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.125849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.126000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.126031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 
00:28:33.305 [2024-11-26 19:29:56.126182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.126213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.126452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.126484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.126689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.126722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.126975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.127007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.127201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.127232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.127433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.127466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.127723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-11-26 19:29:56.127757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-11-26 19:29:56.127977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-11-26 19:29:56.128009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-11-26 19:29:56.128204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-11-26 19:29:56.128236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-11-26 19:29:56.128490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-11-26 19:29:56.128521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 
00:28:33.306 [2024-11-26 19:29:56.128804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-11-26 19:29:56.128837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-11-26 19:29:56.129028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-11-26 19:29:56.129060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-11-26 19:29:56.129266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-11-26 19:29:56.129297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-11-26 19:29:56.129489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-11-26 19:29:56.129520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-11-26 19:29:56.129716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-11-26 19:29:56.129750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-11-26 19:29:56.130064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-11-26 19:29:56.130095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-11-26 19:29:56.130239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-11-26 19:29:56.130270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-11-26 19:29:56.130542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-11-26 19:29:56.130574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-11-26 19:29:56.130834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-11-26 19:29:56.130867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-11-26 19:29:56.131017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-11-26 19:29:56.131049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 
00:28:33.306 [2024-11-26 19:29:56.131231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.306 [2024-11-26 19:29:56.131263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420
00:28:33.306 qpair failed and we were unable to recover it.
(the same three-line sequence — connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error; qpair failed and we were unable to recover it — repeats for every subsequent connect attempt in this interval, from [2024-11-26 19:29:56.131379] through [2024-11-26 19:29:56.185846], all against tqpair=0x1c49be0 with addr=10.0.0.2, port=4420)
00:28:33.311 [2024-11-26 19:29:56.185986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-11-26 19:29:56.186017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-11-26 19:29:56.186167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-11-26 19:29:56.186198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-11-26 19:29:56.186500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-11-26 19:29:56.186538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-11-26 19:29:56.186774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-11-26 19:29:56.186809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-11-26 19:29:56.186947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-11-26 19:29:56.186979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-11-26 19:29:56.187232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.187264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.187516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.187547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.187764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.187798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.188060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.188091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.188299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.188332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 
00:28:33.312 [2024-11-26 19:29:56.188530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.188563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.188860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.188901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.189059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.189091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.189249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.189281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.189469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.189501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.189633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.189665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.189822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.189855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.189995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.190027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.190227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.190259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.190488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.190521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 
00:28:33.312 [2024-11-26 19:29:56.190734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.190768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.190921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.190953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.191082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.191117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.191262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.191295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.191494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.191527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.191844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.191922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.192196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.192273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.192573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.192610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.192808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.192841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.193049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.193082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 
00:28:33.312 [2024-11-26 19:29:56.193336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.193367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.193662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.193706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.193839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.193871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.194012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.194042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.194248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.194280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.194422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.194455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.194644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.194685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.194891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.194923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.195121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.195162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.195280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.195311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 
00:28:33.312 [2024-11-26 19:29:56.195498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.195529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.195667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.195729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.195950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.195983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-11-26 19:29:56.196258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-11-26 19:29:56.196290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.196491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.196523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.196723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.196756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.196950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.196982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.197181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.197212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.197471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.197502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.197727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.197760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 
00:28:33.313 [2024-11-26 19:29:56.197896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.197927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.198074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.198104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.198295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.198326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.198553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.198585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.198789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.198822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.199077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.199109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.199306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.199338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.199563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.199595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.199802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.199836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.200035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.200066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 
00:28:33.313 [2024-11-26 19:29:56.200266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.200297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.200519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.200550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.200811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.200844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.200973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.201004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.201201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.201231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.201439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.201470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.201720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.201753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.201891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.201922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.202050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.202081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.202280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.202313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 
00:28:33.313 [2024-11-26 19:29:56.202568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.202600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.202796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.202829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.202972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.203003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.203206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.203237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.203443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.203474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.203750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.203782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.203920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.203951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.204175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.204207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.204424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.204461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.204590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.204621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 
00:28:33.313 [2024-11-26 19:29:56.204841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.204873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-11-26 19:29:56.205025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-11-26 19:29:56.205055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.205213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.205244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.205445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.205476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.205733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.205766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.205973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.206005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.206205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.206237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.206487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.206518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.206711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.206743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.206968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.207000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 
00:28:33.314 [2024-11-26 19:29:56.207210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.207240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.207389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.207421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.207693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.207726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.207877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.207908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.208135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.208167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.208501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.208534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.208756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.208789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.208997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.209029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.209153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.209184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.209326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.209357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 
00:28:33.314 [2024-11-26 19:29:56.209624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.209655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.209774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.209806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.209988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.210019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.210272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.210304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.210587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.210619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.210866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.210900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.211105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.211137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.211345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.211375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.211524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.211555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.211767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.211801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 
00:28:33.314 [2024-11-26 19:29:56.211998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.212030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.212216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.212249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.212513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.212545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.212724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.212758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.212961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.212993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-11-26 19:29:56.213119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-11-26 19:29:56.213152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.213375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.213406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.213628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.213659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.213827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.213864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.214066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.214097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 
00:28:33.315 [2024-11-26 19:29:56.214229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.214259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.214462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.214495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.214697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.214730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.214859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.214890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.215046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.215078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.215274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.215304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.215600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.215631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.215813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.215846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.215994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.216026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.216230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.216262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 
00:28:33.315 [2024-11-26 19:29:56.216388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.216419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.216607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.216639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.216868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.216900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.217107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.217139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.217269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.217301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.217486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.217517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.217655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.217698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.217921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.217952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.218155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.218187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.218374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.218406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 
00:28:33.315 [2024-11-26 19:29:56.218600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.218630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.218782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.218816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.218958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.218989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.219180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.219212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.219344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.219376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.219578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.219657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.219826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.219862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.220071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.220103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.220253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.220286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.220405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.220437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 
00:28:33.315 [2024-11-26 19:29:56.220553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.220585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.220707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.220740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.220870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.220902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-11-26 19:29:56.221096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-11-26 19:29:56.221128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.221325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.221357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.221474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.221506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.221622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.221653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.221775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.221808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.222025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.222056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.222184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.222215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 
00:28:33.316 [2024-11-26 19:29:56.222344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.222376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.222495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.222527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.222718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.222753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.222879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.222910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.223102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.223135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.223275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.223308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.223495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.223526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.223720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.223753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.223880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.223911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.224102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.224134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 
00:28:33.316 [2024-11-26 19:29:56.224366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.224400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.224523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.224554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.224738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.224777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.224905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.224937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.225136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.225168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.225350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.225383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.225503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.225534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.225741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.225775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.225962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.225993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.226134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.226166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 
00:28:33.316 [2024-11-26 19:29:56.226389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.226422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.226650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.226714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.226839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.226870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.227004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.227035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.227177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.227209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.227393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.227425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.227702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.227736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.227923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.227955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.228229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.228261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.228457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.228489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 
00:28:33.316 [2024-11-26 19:29:56.228633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.228665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.228908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.316 [2024-11-26 19:29:56.228939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.316 qpair failed and we were unable to recover it. 00:28:33.316 [2024-11-26 19:29:56.229148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.229179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.229300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.229331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.229444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.229475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.229655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.229706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.229840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.229870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.230050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.230081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.230333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.230365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.230479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.230522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 
00:28:33.317 [2024-11-26 19:29:56.230726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.230760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.230944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.230977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.231168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.231200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.231376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.231406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.231516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.231547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.231653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.231700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.231894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.231925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.232122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.232154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.232294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.232324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.232502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.232534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 
00:28:33.317 [2024-11-26 19:29:56.232652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.232697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.232828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.232859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.232979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.233012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.233207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.233238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.233373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.233403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.233514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.233546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.233765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.233798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.233983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.234014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.234131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.234162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.234432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.234465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 
00:28:33.317 [2024-11-26 19:29:56.234641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.234686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.234829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.234861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.235041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.235073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.235193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.235224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.235400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.235432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.235570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.235602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.235717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.235756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.236032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.236064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.236192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.236222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-11-26 19:29:56.236398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-11-26 19:29:56.236430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 
00:28:33.317 [2024-11-26 19:29:56.236623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.236655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.236780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.236811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.237035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.237067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.237193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.237223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.237351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.237384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.237686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.237720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.237862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.237894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.238116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.238150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.238354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.238386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.238565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.238596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 
00:28:33.318 [2024-11-26 19:29:56.238717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.238749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.238884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.238914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.239096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.239126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.239306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.239337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.239461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.239492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.239741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.239774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.239902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.239932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.240188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.240220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.240329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.240359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.240476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.240508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 
00:28:33.318 [2024-11-26 19:29:56.240649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.240690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.240872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.240902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.241088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.241120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.241246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.241277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.241470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.241503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.241723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.241756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.241941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.241972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.242101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.242131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.242263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.242296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.242510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.242543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 
00:28:33.318 [2024-11-26 19:29:56.242746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.242803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.242943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.242975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.243167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.243198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.243402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.243434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.243547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.243579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.243823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.243857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-11-26 19:29:56.244127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-11-26 19:29:56.244157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.244288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.244326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.244458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.244490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.244692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.244726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 
00:28:33.319 [2024-11-26 19:29:56.245003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.245036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.245216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.245247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.245369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.245402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.245586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.245617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.245828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.245860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.246134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.246165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.246391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.246424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.246545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.246575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.246712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.246745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.246940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.246972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 
00:28:33.319 [2024-11-26 19:29:56.247098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.247129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.247327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.247358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.247472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.247504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.247628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.247659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.247971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.248004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.248225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.248257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.248375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.248406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.248527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.248557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.248700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.248734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.248857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.248887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 
00:28:33.319 [2024-11-26 19:29:56.249093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.249126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.249313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.249345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.249473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.249505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.249636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.249667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.249863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.249901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.250081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.250112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.250370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.250401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.250583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.250614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.250750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.250782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.250977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.251008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 
00:28:33.319 [2024-11-26 19:29:56.251122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.251152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.251262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-11-26 19:29:56.251292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-11-26 19:29:56.251411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.251441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.251562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.251593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.251777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.251809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.251930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.251960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.252147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.252180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.252308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.252341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.252544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.252575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.252755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.252787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 
00:28:33.320 [2024-11-26 19:29:56.252969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.252999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.253110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.253142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.253342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.253375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.253496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.253526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.253781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.253815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.253932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.253963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.254140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.254172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.254370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.254400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.254533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.254563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.254687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.254720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 
00:28:33.320 [2024-11-26 19:29:56.254934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.254966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.255222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.255259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.255369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.255400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.255614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.255646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.255851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.255881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.256032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.256064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.256187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.256220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.256360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.256391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.256568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.256601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.256719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.256753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 
00:28:33.320 [2024-11-26 19:29:56.256874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.256905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.257183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.257216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.257436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.257468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.257656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.257699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.257860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.257891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.258039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.258071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.258245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.258275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.258474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.258505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.258739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.258772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 00:28:33.320 [2024-11-26 19:29:56.258916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.320 [2024-11-26 19:29:56.258948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.320 qpair failed and we were unable to recover it. 
00:28:33.320 [2024-11-26 19:29:56.259074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.259106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.259296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.259327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.259462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.259492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.259598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.259629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.259825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.259858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.260034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.260064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.260195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.260226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.260348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.260379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.260513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.260545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.260746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.260781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 
00:28:33.321 [2024-11-26 19:29:56.260906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.260939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.261142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.261174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.261304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.261335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.261513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.261545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.261818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.261851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.262040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.262072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.262262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.262294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.262406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.262437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.262570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.262603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.262887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.262920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 
00:28:33.321 [2024-11-26 19:29:56.263118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.263149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.263389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.263421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.263549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.263580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.263769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.263802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.263928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.263961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.264067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.264099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.264234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.264266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.264385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.264418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.264533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.264565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.264748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.264782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 
00:28:33.321 [2024-11-26 19:29:56.264913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.264946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.265127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.265157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.265372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.265403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.265527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.265557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.265686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.265720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.265829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.265860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.321 [2024-11-26 19:29:56.266067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.321 [2024-11-26 19:29:56.266099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.321 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.266342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.266374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.266546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.266578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.266698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.266733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 
00:28:33.322 [2024-11-26 19:29:56.266843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.266874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.266982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.267013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.267201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.267232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.267345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.267376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.267607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.267639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.267833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.267866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.268009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.268042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.268230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.268261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.268472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.268502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.268622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.268659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 
00:28:33.322 [2024-11-26 19:29:56.268855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.268888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.269081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.269113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.269297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.269330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.269448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.269480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.269607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.269638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.269778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.269809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.270021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.270054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.270233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.270264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.270376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.270408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.270529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.270561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 
00:28:33.322 [2024-11-26 19:29:56.270779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.270811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.271009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.271042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.271153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.271186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.271380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.271411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.271532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.271565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.271764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.271797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.272003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.272034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.272229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.272260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.272383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.272415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.272536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.272568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 
00:28:33.322 [2024-11-26 19:29:56.272757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.272790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.272921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.272952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.322 [2024-11-26 19:29:56.273070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.322 [2024-11-26 19:29:56.273102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.322 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.273207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.273239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.273346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.273378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.273492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.273522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.273763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.273802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.273925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.273956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.274081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.274112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.274230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.274261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 
00:28:33.323 [2024-11-26 19:29:56.274475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.274508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.274610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.274642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.274884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.274959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.275110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.275145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.275323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.275357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.275465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.275497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.275604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.275636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.275884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.275918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.276100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.276131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.276258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.276289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 
00:28:33.323 [2024-11-26 19:29:56.276421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.276452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.276568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.276598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.276718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.276750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.276861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.276894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.277007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.277039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.277217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.277248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.277363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.277394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.277583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.277614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.277751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.277782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.277962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.277995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 
00:28:33.323 [2024-11-26 19:29:56.278238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.278270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.278495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.278527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.278718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.278751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.323 [2024-11-26 19:29:56.278910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.323 [2024-11-26 19:29:56.278946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.323 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.279120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.279151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.279278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.279310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.279503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.279535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.279687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.279721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.279895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.279926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.280039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.280070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 
00:28:33.324 [2024-11-26 19:29:56.280253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.280285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.280399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.280430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.280543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.280574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.280746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.280780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.280960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.280991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.281097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.281128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.281324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.281356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.281488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.281519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.281638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.281681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.281791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.281822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 
00:28:33.324 [2024-11-26 19:29:56.282016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.282047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.282184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.282216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.282465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.282496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.282603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.282633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.282769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.282802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.282979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.283010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.283206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.283238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.283370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.283401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.283581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.283614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 00:28:33.324 [2024-11-26 19:29:56.283732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.324 [2024-11-26 19:29:56.283763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.324 qpair failed and we were unable to recover it. 
00:28:33.324 [2024-11-26 19:29:56.283882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.324 [2024-11-26 19:29:56.283919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420
00:28:33.324 qpair failed and we were unable to recover it.
[... the same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every retry through 19:29:56.291642 ...]
00:28:33.325 [2024-11-26 19:29:56.291850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.325 [2024-11-26 19:29:56.291888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420
00:28:33.325 qpair failed and we were unable to recover it.
[... the same sequence continues to repeat, first for tqpair=0x7f8318000b90 and then again for tqpair=0x1c49be0, for every subsequent connection attempt through 19:29:56.326557; each attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:28:33.330 [2024-11-26 19:29:56.326738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.330 [2024-11-26 19:29:56.326770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.330 qpair failed and we were unable to recover it. 00:28:33.330 [2024-11-26 19:29:56.326953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.330 [2024-11-26 19:29:56.326986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.330 qpair failed and we were unable to recover it. 00:28:33.330 [2024-11-26 19:29:56.327200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.330 [2024-11-26 19:29:56.327231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.330 qpair failed and we were unable to recover it. 00:28:33.330 [2024-11-26 19:29:56.327349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.330 [2024-11-26 19:29:56.327380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.330 qpair failed and we were unable to recover it. 00:28:33.330 [2024-11-26 19:29:56.327554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.330 [2024-11-26 19:29:56.327586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.330 qpair failed and we were unable to recover it. 00:28:33.330 [2024-11-26 19:29:56.327793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.330 [2024-11-26 19:29:56.327825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.330 qpair failed and we were unable to recover it. 00:28:33.330 [2024-11-26 19:29:56.327943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.330 [2024-11-26 19:29:56.327973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.330 qpair failed and we were unable to recover it. 
00:28:33.330 Read completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Read completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Read completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Read completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Read completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Read completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Read completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Read completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Write completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Write completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Read completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Write completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Write completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Read completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Write completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Write completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Write completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Read completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Write completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Write completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.330 Write completed with error (sct=0, sc=8) 00:28:33.330 starting I/O failed 00:28:33.331 Read completed with error (sct=0, sc=8) 00:28:33.331 starting I/O failed 00:28:33.331 Read completed with error (sct=0, sc=8) 00:28:33.331 starting I/O failed 00:28:33.331 Read completed with error (sct=0, sc=8) 00:28:33.331 starting I/O failed 00:28:33.331 Read completed with error (sct=0, sc=8) 00:28:33.331 starting I/O failed 00:28:33.331 Read completed with error (sct=0, sc=8) 00:28:33.331 starting I/O failed 00:28:33.331 Write completed with error (sct=0, sc=8) 00:28:33.331 starting I/O failed 00:28:33.331 Read completed with error (sct=0, sc=8) 00:28:33.331 starting I/O failed 00:28:33.331 Read completed with error (sct=0, sc=8) 00:28:33.331 starting I/O failed 00:28:33.331 Read completed with error (sct=0, sc=8) 00:28:33.331 starting I/O failed 00:28:33.331 Read completed with error (sct=0, sc=8) 00:28:33.331 starting I/O failed 00:28:33.331 Write completed with error (sct=0, sc=8) 00:28:33.331 starting I/O failed 00:28:33.331 [2024-11-26 19:29:56.328644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.331 [2024-11-26 19:29:56.328914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.328986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 
00:28:33.331 [2024-11-26 19:29:56.329206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.329241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.329470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.329503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.329772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.329807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.330007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.330037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.330221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.330252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.330423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.330465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.330695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.330727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.331029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.331060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.331275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.331307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.331522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.331554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 
00:28:33.331 [2024-11-26 19:29:56.331724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.331756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.332036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.332068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.332184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.332214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.332392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.332424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.332607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.332638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.332780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.332812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.332953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.332983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.333161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.333193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.333435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.333466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.333654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.333698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 
00:28:33.331 [2024-11-26 19:29:56.333822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.333854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.334031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.334062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.334232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.334263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.334456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.334486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.334599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.334629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.334886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.334919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.335191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.335223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.335411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.335442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.335644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.335686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.335921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.335952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 
00:28:33.331 [2024-11-26 19:29:56.336151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.331 [2024-11-26 19:29:56.336182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.331 qpair failed and we were unable to recover it. 00:28:33.331 [2024-11-26 19:29:56.336361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.336392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.336653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.336749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.336893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.336928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.337107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.337139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.337311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.337343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.337626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.337658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.337821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.337852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.338024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.338055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.338233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.338264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 
00:28:33.332 [2024-11-26 19:29:56.338494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.338526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.338700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.338734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.338848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.338880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.339054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.339102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.339223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.339253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.339481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.339522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.339727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.339759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.339997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.340028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.340284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.340315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.340485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.340516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 
00:28:33.332 [2024-11-26 19:29:56.340621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.340654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.340784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.340816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.340985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.341017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.341199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.341231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.341439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.341469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.341639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.341681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.341866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.341897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.342015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.342046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.342222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.342253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.342436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.342468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 
00:28:33.332 [2024-11-26 19:29:56.342596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.342627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.342745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.342779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.342895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.342927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.343131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.343162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.343365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.343396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.343520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.343552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.343737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.332 [2024-11-26 19:29:56.343770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.332 qpair failed and we were unable to recover it. 00:28:33.332 [2024-11-26 19:29:56.343953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.343986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.344161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.344193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.344396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.344427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 
00:28:33.333 [2024-11-26 19:29:56.344554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.344586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.344887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.344920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.345044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.345082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.345216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.345248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.345374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.345405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.345577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.345609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.345806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.345840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.345959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.345990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.346202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.346233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.346419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.346452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 
00:28:33.333 [2024-11-26 19:29:56.346631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.346661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.346789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.346821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.347005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.347037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.347158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.347189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.347450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.347481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.347656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.347717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.347918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.347951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.348065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.348097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.348271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.348303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.348436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.348466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 
00:28:33.333 [2024-11-26 19:29:56.348641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.348688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.348958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.348989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.349250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.349281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.349400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.349431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.349558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.349589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.349699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.349734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.349973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.350004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.350176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.333 [2024-11-26 19:29:56.350207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.333 qpair failed and we were unable to recover it. 00:28:33.333 [2024-11-26 19:29:56.350319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.350351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.350564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.350595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 
00:28:33.334 [2024-11-26 19:29:56.350712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.350744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.350868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.350900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.351167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.351199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.351442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.351474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.351655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.351694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.351815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.351847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.351968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.351999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.352111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.352142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.352316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.352348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.352519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.352551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 
00:28:33.334 [2024-11-26 19:29:56.352666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.352706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.352953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.352985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.353222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.353259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.353439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.353471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.353741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.353774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.353890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.353922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.354116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.354147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.354281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.354312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.354548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.354580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.354708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.354740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 
00:28:33.334 [2024-11-26 19:29:56.354863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.354894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.355078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.355109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.355280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.355310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.355489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.355520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.355760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.355793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.355966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.355997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.356129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.356160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.356352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.356384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.356552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.356584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.356770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.356803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 
00:28:33.334 [2024-11-26 19:29:56.357039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.357070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.357278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.357311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.334 [2024-11-26 19:29:56.357582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.334 [2024-11-26 19:29:56.357613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.334 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.357835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.357868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.358049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.358081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.358289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.358320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.358561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.358592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.358771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.358805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.358974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.359005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.359190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.359222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 
00:28:33.335 [2024-11-26 19:29:56.359464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.359495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.359611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.359643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.359837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.359870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.360199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.360232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.360360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.360390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.360491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.360523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.360704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.360737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.360974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.361004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.361131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.361163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.361336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.361367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 
00:28:33.335 [2024-11-26 19:29:56.361497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.361530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.361718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.361751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.361858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.361896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.362076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.362109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.362296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.362326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.362527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.362559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.362666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.362710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.362829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.362861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.363105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.363136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.363342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.363374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 
00:28:33.335 [2024-11-26 19:29:56.363571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.363602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.363857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.363891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.364100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.364132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.364318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.364349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.364529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.364559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.364741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.364774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.364959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.364991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.335 qpair failed and we were unable to recover it. 00:28:33.335 [2024-11-26 19:29:56.365095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.335 [2024-11-26 19:29:56.365125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.365249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.365281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.365460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.365492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 
00:28:33.336 [2024-11-26 19:29:56.365757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.365790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.366030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.366061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.366268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.366300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.366481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.366512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.366643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.366684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.366809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.366840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.367013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.367045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.367163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.367194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.367433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.367465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.367585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.367618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 
00:28:33.336 [2024-11-26 19:29:56.367808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.367841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.367970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.368000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.368135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.368167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.368370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.368402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.368538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.368570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.368811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.368845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.368954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.368985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.369171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.369203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.369480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.369511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.369715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.369748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 
00:28:33.336 [2024-11-26 19:29:56.369850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.369880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.370072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.370104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.370227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.370265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.370444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.370476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.370592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.370623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.370798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.370832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.371027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.371058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.371174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.371206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.371383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.371416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.371530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.371561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 
00:28:33.336 [2024-11-26 19:29:56.371713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.371746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.371856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.371887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.372058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.372091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.372207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.372238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.372414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.372446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.372653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.336 [2024-11-26 19:29:56.372698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.336 qpair failed and we were unable to recover it. 00:28:33.336 [2024-11-26 19:29:56.372890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.372922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.373111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.373142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.373271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.373303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.373480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.373512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 
00:28:33.337 [2024-11-26 19:29:56.373614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.373645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.373859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.373890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.374005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.374037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.374208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.374240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.374412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.374443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.374620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.374653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.374798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.374831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.375026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.375059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.375245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.375276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.375420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.375452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 
00:28:33.337 [2024-11-26 19:29:56.375655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.375697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.375938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.375969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.376158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.376190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.376315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.376347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.376454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.376485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.376700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.376733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.376976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.377008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.377120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.377151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.377267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.377297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.377407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.377439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 
00:28:33.337 [2024-11-26 19:29:56.377557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.377588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.377703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.377736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.377934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.377971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.378087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.378119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.378314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.378345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.378528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.378559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.378745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.378778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.378966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.378997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.379173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.379204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.379458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.379489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 
00:28:33.337 [2024-11-26 19:29:56.379679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.379711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.379823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.379855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.380102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.380135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.380326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.380356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.380534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.380566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.380752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.337 [2024-11-26 19:29:56.380785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.337 qpair failed and we were unable to recover it. 00:28:33.337 [2024-11-26 19:29:56.380905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.338 [2024-11-26 19:29:56.380937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.338 qpair failed and we were unable to recover it. 00:28:33.338 [2024-11-26 19:29:56.381146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.338 [2024-11-26 19:29:56.381177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.338 qpair failed and we were unable to recover it. 00:28:33.338 [2024-11-26 19:29:56.381343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.338 [2024-11-26 19:29:56.381375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.338 qpair failed and we were unable to recover it. 00:28:33.338 [2024-11-26 19:29:56.381544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.338 [2024-11-26 19:29:56.381575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.338 qpair failed and we were unable to recover it. 
00:28:33.338 [2024-11-26 19:29:56.381725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.338 [2024-11-26 19:29:56.381758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.338 qpair failed and we were unable to recover it. 00:28:33.338 [2024-11-26 19:29:56.381870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.338 [2024-11-26 19:29:56.381901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.338 qpair failed and we were unable to recover it. 00:28:33.338 [2024-11-26 19:29:56.382138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.338 [2024-11-26 19:29:56.382170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.338 qpair failed and we were unable to recover it. 00:28:33.338 [2024-11-26 19:29:56.382281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.338 [2024-11-26 19:29:56.382312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.338 qpair failed and we were unable to recover it. 00:28:33.338 [2024-11-26 19:29:56.382482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.338 [2024-11-26 19:29:56.382514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.338 qpair failed and we were unable to recover it. 00:28:33.338 [2024-11-26 19:29:56.382637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.338 [2024-11-26 19:29:56.382668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.338 qpair failed and we were unable to recover it. 00:28:33.338 [2024-11-26 19:29:56.382923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.338 [2024-11-26 19:29:56.382954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.338 qpair failed and we were unable to recover it. 00:28:33.338 [2024-11-26 19:29:56.383160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.338 [2024-11-26 19:29:56.383191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.338 qpair failed and we were unable to recover it. 00:28:33.338 [2024-11-26 19:29:56.383374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.338 [2024-11-26 19:29:56.383406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.338 qpair failed and we were unable to recover it. 00:28:33.338 [2024-11-26 19:29:56.383601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.338 [2024-11-26 19:29:56.383633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.338 qpair failed and we were unable to recover it. 
00:28:33.338 [2024-11-26 19:29:56.383828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.338 [2024-11-26 19:29:56.383860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.338 qpair failed and we were unable to recover it. 00:28:33.338 [2024-11-26 19:29:56.384033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.338 [2024-11-26 19:29:56.384064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.338 qpair failed and we were unable to recover it. 00:28:33.338 [2024-11-26 19:29:56.384240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.338 [2024-11-26 19:29:56.384272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.338 qpair failed and we were unable to recover it. 00:28:33.338 [2024-11-26 19:29:56.384451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.338 [2024-11-26 19:29:56.384481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.338 qpair failed and we were unable to recover it. 00:28:33.620 [2024-11-26 19:29:56.384726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.620 [2024-11-26 19:29:56.384759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.620 qpair failed and we were unable to recover it. 00:28:33.620 [2024-11-26 19:29:56.385028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.620 [2024-11-26 19:29:56.385059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.620 qpair failed and we were unable to recover it. 00:28:33.620 [2024-11-26 19:29:56.385179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.620 [2024-11-26 19:29:56.385210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.620 qpair failed and we were unable to recover it. 00:28:33.620 [2024-11-26 19:29:56.385419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.620 [2024-11-26 19:29:56.385450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.620 qpair failed and we were unable to recover it. 00:28:33.620 [2024-11-26 19:29:56.385641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.620 [2024-11-26 19:29:56.385682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.621 qpair failed and we were unable to recover it. 00:28:33.621 [2024-11-26 19:29:56.385814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.621 [2024-11-26 19:29:56.385846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.621 qpair failed and we were unable to recover it. 
00:28:33.621 [2024-11-26 19:29:56.385974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.621 [2024-11-26 19:29:56.386006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.621 qpair failed and we were unable to recover it. 00:28:33.621 [2024-11-26 19:29:56.386202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.621 [2024-11-26 19:29:56.386233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.621 qpair failed and we were unable to recover it. 00:28:33.621 [2024-11-26 19:29:56.386423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.621 [2024-11-26 19:29:56.386462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.621 qpair failed and we were unable to recover it. 00:28:33.621 [2024-11-26 19:29:56.386662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.621 [2024-11-26 19:29:56.386701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.621 qpair failed and we were unable to recover it. 00:28:33.621 [2024-11-26 19:29:56.386947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.621 [2024-11-26 19:29:56.386980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.621 qpair failed and we were unable to recover it. 00:28:33.621 [2024-11-26 19:29:56.387095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.621 [2024-11-26 19:29:56.387126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.621 qpair failed and we were unable to recover it. 00:28:33.621 [2024-11-26 19:29:56.387324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.621 [2024-11-26 19:29:56.387355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.621 qpair failed and we were unable to recover it. 00:28:33.621 [2024-11-26 19:29:56.387476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.621 [2024-11-26 19:29:56.387507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.621 qpair failed and we were unable to recover it. 00:28:33.621 [2024-11-26 19:29:56.387718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.621 [2024-11-26 19:29:56.387752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.621 qpair failed and we were unable to recover it. 00:28:33.621 [2024-11-26 19:29:56.387961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.621 [2024-11-26 19:29:56.387992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.621 qpair failed and we were unable to recover it. 
00:28:33.621 [2024-11-26 19:29:56.388253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.621 [2024-11-26 19:29:56.388284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420
00:28:33.621 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats back-to-back from 19:29:56.388 through 19:29:56.430 (elapsed 00:28:33.621 to 00:28:33.627) for tqpair handles 0x7f8320000b90, 0x7f8318000b90 and 0x1c49be0 ...]
00:28:33.627 [2024-11-26 19:29:56.430952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.627 [2024-11-26 19:29:56.430984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420
00:28:33.627 qpair failed and we were unable to recover it.
00:28:33.627 [2024-11-26 19:29:56.431171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.431204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.431375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.431407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.431526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.431559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.431745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.431778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.431897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.431928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.432107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.432140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.432377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.432410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.432519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.432550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.432657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.432703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.432880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.432917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 
00:28:33.627 [2024-11-26 19:29:56.433123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.433155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.433346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.433378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.433654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.433706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.433827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.433859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.434029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.434062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.434242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.434275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.434392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.434424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.434535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.434567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.434842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.434877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.435001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.435033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 
00:28:33.627 [2024-11-26 19:29:56.435149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.435181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.435296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.435329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.435436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.435467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.435731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.435765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.435886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.435918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.436047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.436080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.436209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.436241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.436417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.436449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.436624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.436657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.436837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.436870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 
00:28:33.627 [2024-11-26 19:29:56.437051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.627 [2024-11-26 19:29:56.437083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.627 qpair failed and we were unable to recover it. 00:28:33.627 [2024-11-26 19:29:56.437187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.437219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.437463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.437495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.437612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.437644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.437842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.437875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.437999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.438031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.438150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.438182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.438360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.438393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.438569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.438601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.438773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.438807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 
00:28:33.628 [2024-11-26 19:29:56.438994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.439026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.439196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.439228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.439416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.439448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.439621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.439653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.439838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.439869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.440044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.440077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.440193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.440225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.440419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.440451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.440566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.440597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.440778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.440818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 
00:28:33.628 [2024-11-26 19:29:56.441034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.441066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.441185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.441217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.441405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.441438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.441613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.441644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.441847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.441880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.442071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.442105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.442383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.442415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.442542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.442574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.442808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.442843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.443042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.443074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 
00:28:33.628 [2024-11-26 19:29:56.443256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.443288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.443478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.443509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.443745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.443778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.443990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.444022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.628 qpair failed and we were unable to recover it. 00:28:33.628 [2024-11-26 19:29:56.444227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.628 [2024-11-26 19:29:56.444258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.444458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.444490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.444752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.444786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.444956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.444988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.445189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.445221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.445486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.445519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 
00:28:33.629 [2024-11-26 19:29:56.445704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.445736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.445960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.445992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.446114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.446146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.446258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.446289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.446472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.446506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.446683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.446718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.446841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.446873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.447078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.447111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.447357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.447390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.447515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.447548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 
00:28:33.629 [2024-11-26 19:29:56.447731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.447765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.447881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.447914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.448139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.448170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.448288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.448320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.448505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.448537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.448737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.448770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.448948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.448981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.449087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.449118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.449373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.449404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.449593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.449631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 
00:28:33.629 [2024-11-26 19:29:56.449930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.449964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.450096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.450128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.450310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.450341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.450456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.450487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.450606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.450638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.450907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.450943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.451210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.451242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.451347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.451379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.451640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.451703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.451955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.451987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 
00:28:33.629 [2024-11-26 19:29:56.452164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.452196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.452311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.452344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.629 [2024-11-26 19:29:56.452527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.629 [2024-11-26 19:29:56.452560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.629 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.452831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.452866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.452994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.453027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.453301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.453333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.453519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.453551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.453682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.453716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.453834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.453865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.454071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.454106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 
00:28:33.630 [2024-11-26 19:29:56.454212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.454244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.454515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.454547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.454660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.454701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.454884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.454917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.455103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.455135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.455379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.455410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.455547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.455580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.455836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.455870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.455993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.456026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.456135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.456168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 
00:28:33.630 [2024-11-26 19:29:56.456292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.456324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.456499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.456531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.456781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.456821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.456938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.456969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.457150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.457183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.457458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.457491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.457622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.457655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.457778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.457811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.457985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.458017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.458212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.458250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 
00:28:33.630 [2024-11-26 19:29:56.458366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.458405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.458538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.458570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.458756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.458790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.458904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.458936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.459104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.459137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.459329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.630 [2024-11-26 19:29:56.459361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.630 qpair failed and we were unable to recover it. 00:28:33.630 [2024-11-26 19:29:56.459532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.459564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.459760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.459794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.459915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.459947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.460067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.460100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 
00:28:33.631 [2024-11-26 19:29:56.460295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.460327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.460445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.460477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.460654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.460734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.460895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.460927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.461110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.461141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.461308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.461341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.461547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.461579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.461768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.461802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.461976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.462009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.462274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.462307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 
00:28:33.631 [2024-11-26 19:29:56.462563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.462595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.462717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.462751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.462955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.462988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.463187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.463220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.463389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.463422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.463601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.463633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.463899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.463933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.464125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.464158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.464265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.464296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.464476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.464509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 
00:28:33.631 [2024-11-26 19:29:56.464701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.464734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.464868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.464900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.465080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.465112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.465282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.465315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.465436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.465468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.465640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.465683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.465855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.465888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.466015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.466047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.466221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.466253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.466524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.466563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 
00:28:33.631 [2024-11-26 19:29:56.466757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.466791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.466984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.467015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.467197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.467229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.467401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.467433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.631 qpair failed and we were unable to recover it. 00:28:33.631 [2024-11-26 19:29:56.467540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.631 [2024-11-26 19:29:56.467572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.467855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.467889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.468073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.468105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.468289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.468321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.468525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.468557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.468766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.468800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 
00:28:33.632 [2024-11-26 19:29:56.468986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.469018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.469201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.469233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.469500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.469533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.469730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.469764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.469968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.470000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.470215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.470248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.470384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.470417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.470623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.470656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.470850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.470882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.471122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.471156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 
00:28:33.632 [2024-11-26 19:29:56.471403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.471435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.471573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.471606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.471721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.471754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.471960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.471992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.472119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.472151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.472274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.472305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.472491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.472523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.472694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.472727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.472844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.472875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.473064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.473096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 
00:28:33.632 [2024-11-26 19:29:56.473227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.473259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.473438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.473471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.473592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.473623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.473766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.473799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.473914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.473947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.474083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.474114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.474375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.474407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.474588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.474619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.474832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.474866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.475048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.475085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 
00:28:33.632 [2024-11-26 19:29:56.475281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.632 [2024-11-26 19:29:56.475313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.632 qpair failed and we were unable to recover it. 00:28:33.632 [2024-11-26 19:29:56.475492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.475525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.475814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.475848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.475985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.476016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.476207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.476239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.476423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.476455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.476572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.476604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.476719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.476752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.476893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.476925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.477044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.477075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 
00:28:33.633 [2024-11-26 19:29:56.477178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.477210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.477391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.477425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.477527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.477560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.477722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.477757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.478028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.478060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.478240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.478272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.478412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.478444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.478633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.478665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.478804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.478836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.479010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.479042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 
00:28:33.633 [2024-11-26 19:29:56.479223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.479255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.479436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.479468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.479644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.479686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.479802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.479835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.480029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.480060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.480241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.480272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.480516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.480548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.480690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.480723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.480915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.480948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.481201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.481234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 
00:28:33.633 [2024-11-26 19:29:56.481481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.481513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.481707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.481741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.481980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.482013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.482207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.482239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.482359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.482392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.482575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.482607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.482846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.482879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.483121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.483154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.483341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-11-26 19:29:56.483372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.633 qpair failed and we were unable to recover it. 00:28:33.633 [2024-11-26 19:29:56.483545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.483582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 
00:28:33.634 [2024-11-26 19:29:56.483709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.483742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.483857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.483889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.484149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.484181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.484295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.484327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.484522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.484554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.484703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.484736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.484856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.484887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.485079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.485112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.485249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.485281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.485463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.485494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 
00:28:33.634 [2024-11-26 19:29:56.485736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.485771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.485952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.485984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.486273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.486305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.486556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.486589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.486796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.486829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.487001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.487035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.487276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.487307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.487597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.487629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.487762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.487796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.487985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.488016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 
00:28:33.634 [2024-11-26 19:29:56.488252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.488284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.488402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.488433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.488616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.488649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.488834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.488867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.489047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.489078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.489314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.489346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.489539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.489571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.489754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.489788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.489910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.489941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.490061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.490093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 
00:28:33.634 [2024-11-26 19:29:56.490280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.490312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.490482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.490514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.490640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.490691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.634 qpair failed and we were unable to recover it. 00:28:33.634 [2024-11-26 19:29:56.490934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.634 [2024-11-26 19:29:56.490966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.635 qpair failed and we were unable to recover it. 00:28:33.635 [2024-11-26 19:29:56.491077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.635 [2024-11-26 19:29:56.491109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.635 qpair failed and we were unable to recover it. 00:28:33.635 [2024-11-26 19:29:56.491303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.635 [2024-11-26 19:29:56.491334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.635 qpair failed and we were unable to recover it. 00:28:33.635 [2024-11-26 19:29:56.491507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.635 [2024-11-26 19:29:56.491539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.635 qpair failed and we were unable to recover it. 00:28:33.635 [2024-11-26 19:29:56.491709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.635 [2024-11-26 19:29:56.491744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.635 qpair failed and we were unable to recover it. 00:28:33.635 [2024-11-26 19:29:56.491923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.635 [2024-11-26 19:29:56.491953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.635 qpair failed and we were unable to recover it. 00:28:33.635 [2024-11-26 19:29:56.492137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.635 [2024-11-26 19:29:56.492176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.635 qpair failed and we were unable to recover it. 
00:28:33.635 [2024-11-26 19:29:56.492411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.635 [2024-11-26 19:29:56.492444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.635 qpair failed and we were unable to recover it. 00:28:33.635 [2024-11-26 19:29:56.492549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.635 [2024-11-26 19:29:56.492582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.635 qpair failed and we were unable to recover it. 00:28:33.635 [2024-11-26 19:29:56.492749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.635 [2024-11-26 19:29:56.492782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.635 qpair failed and we were unable to recover it. 00:28:33.635 [2024-11-26 19:29:56.492960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.635 [2024-11-26 19:29:56.492992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.635 qpair failed and we were unable to recover it. 00:28:33.635 [2024-11-26 19:29:56.493115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.635 [2024-11-26 19:29:56.493148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.635 qpair failed and we were unable to recover it. 00:28:33.635 [2024-11-26 19:29:56.493330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.635 [2024-11-26 19:29:56.493362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.635 qpair failed and we were unable to recover it. 00:28:33.635 [2024-11-26 19:29:56.493632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.635 [2024-11-26 19:29:56.493664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.635 qpair failed and we were unable to recover it. 00:28:33.635 [2024-11-26 19:29:56.493870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.635 [2024-11-26 19:29:56.493903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.635 qpair failed and we were unable to recover it. 00:28:33.635 [2024-11-26 19:29:56.494019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.635 [2024-11-26 19:29:56.494051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.635 qpair failed and we were unable to recover it. 00:28:33.635 [2024-11-26 19:29:56.494254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.635 [2024-11-26 19:29:56.494285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.635 qpair failed and we were unable to recover it. 
00:28:33.635 [2024-11-26 19:29:56.494411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.635 [2024-11-26 19:29:56.494443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420
00:28:33.635 qpair failed and we were unable to recover it.
00:28:33.635-00:28:33.641 [the same error pair from posix.c:1054:posix_sock_create (connect() failed, errno = 111) and nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock, each followed by "qpair failed and we were unable to recover it.", repeats continuously between 19:29:56.494 and 19:29:56.538 for tqpair handles 0x7f8320000b90, 0x7f8314000b90, and 0x1c49be0, always targeting addr=10.0.0.2, port=4420]
00:28:33.641 [2024-11-26 19:29:56.538518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.538549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.538817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.538850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.539134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.539165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.539283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.539315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.539441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.539472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.539733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.539766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.539951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.539982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.540177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.540209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.540395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.540427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.540605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.540636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 
00:28:33.641 [2024-11-26 19:29:56.540832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.540864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.541051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.541082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.541209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.541240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.541442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.541473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.541655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.541694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.541892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.541923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.542106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.542138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.542308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.542340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.542527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.542558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.542741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.542774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 
00:28:33.641 [2024-11-26 19:29:56.542972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.543002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.543125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.543156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.543264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.641 [2024-11-26 19:29:56.543296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.641 qpair failed and we were unable to recover it. 00:28:33.641 [2024-11-26 19:29:56.543410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.543441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.543544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.543576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.543748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.543781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.543977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.544008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.544278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.544311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.544437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.544469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.544689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.544723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 
00:28:33.642 [2024-11-26 19:29:56.544895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.544927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.545063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.545094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.545268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.545299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.545423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.545460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.545668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.545721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.545835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.545867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.546075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.546107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.546307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.546338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.546454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.546485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.546692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.546725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 
00:28:33.642 [2024-11-26 19:29:56.546904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.546935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.547059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.547091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.547206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.547237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.547367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.547398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.547663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.547704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.547808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.547840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.548020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.548051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.548225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.548257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.548390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.548422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.548527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.548558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 
00:28:33.642 [2024-11-26 19:29:56.548727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.548760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.548939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.548971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.549186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.549217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.549327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.549358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.549613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.549645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.549897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.549929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.550071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.550103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.550343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.550374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.642 [2024-11-26 19:29:56.550566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.642 [2024-11-26 19:29:56.550598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.642 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.550862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.550895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 
00:28:33.643 [2024-11-26 19:29:56.551150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.551220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.551487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.551556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.551818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.551853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.552105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.552136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.552316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.552347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.552526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.552557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.552736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.552768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.552883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.552913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.553042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.553073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.553205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.553235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 
00:28:33.643 [2024-11-26 19:29:56.553431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.553462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.553649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.553694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.553885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.553914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.554101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.554133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.554269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.554300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.554475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.554506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.554694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.554746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.554858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.554889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.555129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.555162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.555360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.555392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 
00:28:33.643 [2024-11-26 19:29:56.555528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.555558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.555689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.555721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.555937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.555968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.556146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.556178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.556283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.556315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.556487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.556518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.556647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.556685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.556875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.556909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.557179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.557209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.557413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.557444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 
00:28:33.643 [2024-11-26 19:29:56.557634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.557665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.557875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.557906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.558089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.558120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.558359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.558391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.558511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.558541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.558664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.643 [2024-11-26 19:29:56.558706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-11-26 19:29:56.558821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.558853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.559051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.559083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.559254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.559286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.559411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.559441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 
00:28:33.644 [2024-11-26 19:29:56.559644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.559688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.559863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.559892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.560077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.560109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.560238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.560268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.560528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.560560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.560690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.560723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.560840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.560870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.561055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.561086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.561256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.561287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.561583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.561614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 
00:28:33.644 [2024-11-26 19:29:56.561822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.561854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.561970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.562001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.562105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.562137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.562419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.562450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.562642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.562684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.562874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.562906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.563117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.563148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.563334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.563366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.563550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.563581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.563761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.563794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 
00:28:33.644 [2024-11-26 19:29:56.564010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.564041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.564275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.564306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.564489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.564520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.564705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.564738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.564847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.564879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.565056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.644 [2024-11-26 19:29:56.565088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-11-26 19:29:56.565348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.645 [2024-11-26 19:29:56.565379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.645 qpair failed and we were unable to recover it. 00:28:33.645 [2024-11-26 19:29:56.565588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.645 [2024-11-26 19:29:56.565639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.645 qpair failed and we were unable to recover it. 00:28:33.645 [2024-11-26 19:29:56.565789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.645 [2024-11-26 19:29:56.565824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.645 qpair failed and we were unable to recover it. 00:28:33.645 [2024-11-26 19:29:56.565937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.645 [2024-11-26 19:29:56.565968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.645 qpair failed and we were unable to recover it. 
00:28:33.645 [2024-11-26 19:29:56.566176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.645 [2024-11-26 19:29:56.566207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420
00:28:33.645 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously for timestamps 19:29:56.566379 through 19:29:56.610122, build-log prefixes 00:28:33.645 through 00:28:33.652 ...]
00:28:33.652 [2024-11-26 19:29:56.610298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.652 [2024-11-26 19:29:56.610329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420
00:28:33.652 qpair failed and we were unable to recover it.
00:28:33.652 [2024-11-26 19:29:56.610523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.610554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.610770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.610803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.610924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.610956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.611137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.611168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.611358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.611389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.611650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.611688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.611872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.611903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.612160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.612190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.612383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.612413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.612662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.612702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 
00:28:33.652 [2024-11-26 19:29:56.612818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.612848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.613117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.613148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.613316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.613346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.613475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.613506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.613753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.613786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.613908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.613939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.614118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.614149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.614386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.614417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.614596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.614626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.614875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.614907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 
00:28:33.652 [2024-11-26 19:29:56.615021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.615051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.615237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.615268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.615467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.615498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.615747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.615780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.616041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.616071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.616179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.652 [2024-11-26 19:29:56.616209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.652 qpair failed and we were unable to recover it. 00:28:33.652 [2024-11-26 19:29:56.616327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.616358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.616466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.616502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.616694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.616726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.616902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.616933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 
00:28:33.653 [2024-11-26 19:29:56.617120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.617151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.617267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.617298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.617492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.617523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.617699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.617731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.617924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.617955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.618070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.618100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.618303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.618334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.618526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.618558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.618753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.618785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.618979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.619009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 
00:28:33.653 [2024-11-26 19:29:56.619113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.619145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.619334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.619364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.619548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.619578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.619767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.619800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.619937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.619967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.620233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.620263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.620504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.620536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.620795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.620827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.620962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.620992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.621278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.621308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 
00:28:33.653 [2024-11-26 19:29:56.621490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.621520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.621703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.621736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.622000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.622030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.622148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.622178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.622420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.622451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.622630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.622661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.653 qpair failed and we were unable to recover it. 00:28:33.653 [2024-11-26 19:29:56.622849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.653 [2024-11-26 19:29:56.622882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.623018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.623048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.623307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.623337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.623515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.623546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 
00:28:33.654 [2024-11-26 19:29:56.623806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.623838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.624030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.624061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.624286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.624317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.624487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.624519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.624640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.624676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.624796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.624827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.625014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.625045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.625154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.625189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.625317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.625348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.625604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.625635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 
00:28:33.654 [2024-11-26 19:29:56.625921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.625954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.626078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.626109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.626312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.626343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.626619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.626650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.626870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.626903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.627157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.627188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.627381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.627411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.627512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.627543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.627731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.627764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.627881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.627911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 
00:28:33.654 [2024-11-26 19:29:56.628141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.628172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.628300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.628331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.628520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.628550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.628677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.628709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.628893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.628924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.629040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.629070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.629252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.629284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.629464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.629495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.654 [2024-11-26 19:29:56.629614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.654 [2024-11-26 19:29:56.629645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.654 qpair failed and we were unable to recover it. 00:28:33.655 [2024-11-26 19:29:56.629839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.655 [2024-11-26 19:29:56.629870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.655 qpair failed and we were unable to recover it. 
00:28:33.655 [2024-11-26 19:29:56.629984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.655 [2024-11-26 19:29:56.630016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420
00:28:33.655 qpair failed and we were unable to recover it.
00:28:33.655 [2024-11-26 19:29:56.630276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.655 [2024-11-26 19:29:56.630306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420
00:28:33.655 qpair failed and we were unable to recover it.
00:28:33.655 [2024-11-26 19:29:56.630488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.655 [2024-11-26 19:29:56.630518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420
00:28:33.655 qpair failed and we were unable to recover it.
00:28:33.655 [2024-11-26 19:29:56.630712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.655 [2024-11-26 19:29:56.630745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420
00:28:33.655 qpair failed and we were unable to recover it.
00:28:33.655 [2024-11-26 19:29:56.630920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.655 [2024-11-26 19:29:56.630991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:33.655 qpair failed and we were unable to recover it.
00:28:33.655 [2024-11-26 19:29:56.631200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.655 [2024-11-26 19:29:56.631236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:33.655 qpair failed and we were unable to recover it.
00:28:33.655 [2024-11-26 19:29:56.631353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.655 [2024-11-26 19:29:56.631384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:33.655 qpair failed and we were unable to recover it.
00:28:33.655 [2024-11-26 19:29:56.631622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.655 [2024-11-26 19:29:56.631654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:33.655 qpair failed and we were unable to recover it.
00:28:33.655 [2024-11-26 19:29:56.631922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.655 [2024-11-26 19:29:56.631955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:33.655 qpair failed and we were unable to recover it.
00:28:33.655 [2024-11-26 19:29:56.632214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.655 [2024-11-26 19:29:56.632244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:33.655 qpair failed and we were unable to recover it.
00:28:33.657 [2024-11-26 19:29:56.647883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.657 [2024-11-26 19:29:56.647913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:33.657 qpair failed and we were unable to recover it.
00:28:33.657 [2024-11-26 19:29:56.648095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.648126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.648361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.648392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.648506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.648536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.648720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.648752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.648934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.648966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.649201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.649232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.649427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.649458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.649574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.649604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.649787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.649818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.650080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.650110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 
00:28:33.657 [2024-11-26 19:29:56.650288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.650319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.650431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.650462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.650660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.650700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.650937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.650968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.651136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.651167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.651408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.651438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.651571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.651600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.651771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.651808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.651938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.651969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.652080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.652110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 
00:28:33.657 [2024-11-26 19:29:56.652208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.652239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.652407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.652438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.652644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.652683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.652813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.652843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.653083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.653114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.653297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.653327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.653451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.653481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.657 qpair failed and we were unable to recover it. 00:28:33.657 [2024-11-26 19:29:56.653690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.657 [2024-11-26 19:29:56.653721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.653900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.653930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.654043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.654074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 
00:28:33.658 [2024-11-26 19:29:56.654310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.654341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.654460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.654491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.654725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.654757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.654948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.654978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.655107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.655137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.655324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.655357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.655471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.655500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.655610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.655640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.655787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.655820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.656008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.656037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 
00:28:33.658 [2024-11-26 19:29:56.656217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.656248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.656430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.656460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.656654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.656693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.656812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.656843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.657131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.657200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.657349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.657384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.657567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.657599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.657805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.657839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.658147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.658178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.658291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.658321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 
00:28:33.658 [2024-11-26 19:29:56.658501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.658533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.658722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.658755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.658961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.658994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.659114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.659145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.659316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.659346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.659533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.659564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.659755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.659787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.659933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.659972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.660234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.660265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.660469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.660499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 
00:28:33.658 [2024-11-26 19:29:56.660600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.660631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.660830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.660862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.661033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.661065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.661323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.661354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.661534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.661565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.661752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.661786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.661912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.661943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.662059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.662090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.662274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.662305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.662434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.662464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 
00:28:33.658 [2024-11-26 19:29:56.662636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.658 [2024-11-26 19:29:56.662667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.658 qpair failed and we were unable to recover it. 00:28:33.658 [2024-11-26 19:29:56.662974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.663006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.663245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.663276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.663443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.663474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.663700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.663733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.663920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.663950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.664142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.664172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.664337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.664368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.664551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.664582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.664769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.664801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 
00:28:33.659 [2024-11-26 19:29:56.665040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.665071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.665260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.665290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.665533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.665564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.665828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.665860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.666104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.666135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.666255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.666287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.666401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.666432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.666690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.666723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.666843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.666874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.667071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.667101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 
00:28:33.659 [2024-11-26 19:29:56.667215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.667246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.667368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.667399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.667515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.667546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.667726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.667759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.667936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.667967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.668172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.668203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.668461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.668492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.668622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.668664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.668886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.668917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.669126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.669157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 
00:28:33.659 [2024-11-26 19:29:56.669416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.669448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.669704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.669735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.669855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.669885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.670055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.670085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.670355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.670386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.659 [2024-11-26 19:29:56.670583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.659 [2024-11-26 19:29:56.670613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.659 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.670788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.670820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.671012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.671043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.671212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.671243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.671420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.671451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 
00:28:33.660 [2024-11-26 19:29:56.671619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.671649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.671937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.671970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.672209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.672239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.672427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.672457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.672666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.672710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.672953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.672984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.673224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.673256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.673441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.673472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.673657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.673707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.673837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.673867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 
00:28:33.660 [2024-11-26 19:29:56.674073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.674103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.674341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.674372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.674553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.674583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.674714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.674747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.674933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.675004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.675218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.675254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.675384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.675416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.675594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.675625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.675815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.675849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 00:28:33.660 [2024-11-26 19:29:56.676032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.660 [2024-11-26 19:29:56.676063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.660 qpair failed and we were unable to recover it. 
00:28:33.660 [2024-11-26 19:29:56.676178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.660 [2024-11-26 19:29:56.676209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420
00:28:33.660 qpair failed and we were unable to recover it.
[... the same error triplet (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 19:29:56.676 through 19:29:56.722 ...]
00:28:33.960 [2024-11-26 19:29:56.722825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.960 [2024-11-26 19:29:56.722856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.960 qpair failed and we were unable to recover it. 00:28:33.960 [2024-11-26 19:29:56.723040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.960 [2024-11-26 19:29:56.723071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.960 qpair failed and we were unable to recover it. 00:28:33.960 [2024-11-26 19:29:56.723306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.960 [2024-11-26 19:29:56.723337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.960 qpair failed and we were unable to recover it. 00:28:33.960 [2024-11-26 19:29:56.723452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.960 [2024-11-26 19:29:56.723483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.961 qpair failed and we were unable to recover it. 00:28:33.961 [2024-11-26 19:29:56.723659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.961 [2024-11-26 19:29:56.723719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.961 qpair failed and we were unable to recover it. 00:28:33.961 [2024-11-26 19:29:56.723843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.961 [2024-11-26 19:29:56.723874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.961 qpair failed and we were unable to recover it. 00:28:33.961 [2024-11-26 19:29:56.723988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.961 [2024-11-26 19:29:56.724020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.961 qpair failed and we were unable to recover it. 00:28:33.961 [2024-11-26 19:29:56.724303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.961 [2024-11-26 19:29:56.724333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.961 qpair failed and we were unable to recover it. 00:28:33.961 [2024-11-26 19:29:56.724542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.961 [2024-11-26 19:29:56.724573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.961 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.724747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.724781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 
00:28:33.962 [2024-11-26 19:29:56.724914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.724945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.725069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.725101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.725271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.725302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.725473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.725505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.725624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.725654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.725840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.725872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.726144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.726175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.726292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.726323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.726495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.726525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.726766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.726799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 
00:28:33.962 [2024-11-26 19:29:56.726926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.726957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.727075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.727106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.727286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.727317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.727492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.727523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.727708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.727746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.727865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.727896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.728075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.728107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.728294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.728324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.728504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.728535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.728716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.728749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 
00:28:33.962 [2024-11-26 19:29:56.729007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.729038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.729208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.729241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.729412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.729444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.729566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.729596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.729802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.729835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.730014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.730045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.730307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.962 [2024-11-26 19:29:56.730339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.962 qpair failed and we were unable to recover it. 00:28:33.962 [2024-11-26 19:29:56.730531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.730563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.730748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.730782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.730909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.730940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 
00:28:33.963 [2024-11-26 19:29:56.731229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.731261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.731452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.731484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.731662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.731731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.731914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.731945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.732058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.732090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.732276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.732307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.732435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.732467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.732712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.732745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.733041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.733072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.733267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.733299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 
00:28:33.963 [2024-11-26 19:29:56.733429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.733460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.733603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.733636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.733888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.733921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.734129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.734160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.734348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.734380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.734547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.734578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.734687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.734720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.734891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.734922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.735126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.735157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.735429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.735460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 
00:28:33.963 [2024-11-26 19:29:56.735607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.735639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.735835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.735868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.736000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.736032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.736298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.736329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.736543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.736581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.736821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.736855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.736964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.736995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.737200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.737231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.737359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.737391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.737562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.737592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 
00:28:33.963 [2024-11-26 19:29:56.737775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.737808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.737995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.738027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.738141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.738172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.738278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.963 [2024-11-26 19:29:56.738310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.963 qpair failed and we were unable to recover it. 00:28:33.963 [2024-11-26 19:29:56.738549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.738580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.738761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.738793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.738912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.738943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.739116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.739148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.739337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.739368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.739608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.739641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 
00:28:33.964 [2024-11-26 19:29:56.739779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.739812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.739999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.740031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.740220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.740251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.740368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.740399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.740599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.740631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.740880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.740912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.741096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.741128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.741255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.741287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.741484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.741515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.741625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.741657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 
00:28:33.964 [2024-11-26 19:29:56.741925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.741957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.742140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.742172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.742352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.742383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.742570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.742601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.742860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.742894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.743165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.743197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.743375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.743406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.743595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.743626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.743840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.743873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.744044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.744076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 
00:28:33.964 [2024-11-26 19:29:56.744216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.744247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.744360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.744392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.744560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.744591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.744818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.744852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.745044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.745080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.745215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.745247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.745481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.745513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.745629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.745660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.745916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.745949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.746128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.746159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 
00:28:33.964 [2024-11-26 19:29:56.746329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.746360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.964 [2024-11-26 19:29:56.746546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.964 [2024-11-26 19:29:56.746577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.964 qpair failed and we were unable to recover it. 00:28:33.965 [2024-11-26 19:29:56.746819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.965 [2024-11-26 19:29:56.746853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.965 qpair failed and we were unable to recover it. 00:28:33.965 [2024-11-26 19:29:56.747041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.965 [2024-11-26 19:29:56.747072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.965 qpair failed and we were unable to recover it. 00:28:33.965 [2024-11-26 19:29:56.747317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.965 [2024-11-26 19:29:56.747349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.965 qpair failed and we were unable to recover it. 00:28:33.965 [2024-11-26 19:29:56.747461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.965 [2024-11-26 19:29:56.747492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.965 qpair failed and we were unable to recover it. 00:28:33.965 [2024-11-26 19:29:56.747757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.965 [2024-11-26 19:29:56.747789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.965 qpair failed and we were unable to recover it. 00:28:33.965 [2024-11-26 19:29:56.747998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.965 [2024-11-26 19:29:56.748030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.965 qpair failed and we were unable to recover it. 00:28:33.965 [2024-11-26 19:29:56.748213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.965 [2024-11-26 19:29:56.748245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.965 qpair failed and we were unable to recover it. 00:28:33.965 [2024-11-26 19:29:56.748446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.965 [2024-11-26 19:29:56.748478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.965 qpair failed and we were unable to recover it. 
00:28:33.965 [2024-11-26 19:29:56.748662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.965 [2024-11-26 19:29:56.748703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.965 qpair failed and we were unable to recover it. 00:28:33.965 [2024-11-26 19:29:56.748941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.965 [2024-11-26 19:29:56.748973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.965 qpair failed and we were unable to recover it. 00:28:33.965 [2024-11-26 19:29:56.749163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.965 [2024-11-26 19:29:56.749195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.965 qpair failed and we were unable to recover it. 00:28:33.965 [2024-11-26 19:29:56.749452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.965 [2024-11-26 19:29:56.749483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.965 qpair failed and we were unable to recover it. 00:28:33.965 [2024-11-26 19:29:56.749587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.965 [2024-11-26 19:29:56.749619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.965 qpair failed and we were unable to recover it. 00:28:33.965 [2024-11-26 19:29:56.749753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.965 [2024-11-26 19:29:56.749785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.965 qpair failed and we were unable to recover it. 00:28:33.965 [2024-11-26 19:29:56.750045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.965 [2024-11-26 19:29:56.750076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.965 qpair failed and we were unable to recover it. 00:28:33.965 [2024-11-26 19:29:56.750316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.965 [2024-11-26 19:29:56.750348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.965 qpair failed and we were unable to recover it. 00:28:33.965 [2024-11-26 19:29:56.750530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.965 [2024-11-26 19:29:56.750561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.965 qpair failed and we were unable to recover it. 00:28:33.966 [2024-11-26 19:29:56.750693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.966 [2024-11-26 19:29:56.750726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.966 qpair failed and we were unable to recover it. 
00:28:33.966 [2024-11-26 19:29:56.750976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.966 [2024-11-26 19:29:56.751007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.966 qpair failed and we were unable to recover it. 00:28:33.966 [2024-11-26 19:29:56.751130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.966 [2024-11-26 19:29:56.751162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.966 qpair failed and we were unable to recover it. 00:28:33.966 [2024-11-26 19:29:56.751335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.966 [2024-11-26 19:29:56.751366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.966 qpair failed and we were unable to recover it. 00:28:33.966 [2024-11-26 19:29:56.751490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.966 [2024-11-26 19:29:56.751522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.966 qpair failed and we were unable to recover it. 00:28:33.966 [2024-11-26 19:29:56.751636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.966 [2024-11-26 19:29:56.751667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.966 qpair failed and we were unable to recover it. 00:28:33.966 [2024-11-26 19:29:56.751929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.966 [2024-11-26 19:29:56.751961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.966 qpair failed and we were unable to recover it. 00:28:33.966 [2024-11-26 19:29:56.752149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.966 [2024-11-26 19:29:56.752180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.966 qpair failed and we were unable to recover it. 00:28:33.966 [2024-11-26 19:29:56.752457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.966 [2024-11-26 19:29:56.752488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.966 qpair failed and we were unable to recover it. 00:28:33.966 [2024-11-26 19:29:56.752701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.966 [2024-11-26 19:29:56.752733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.966 qpair failed and we were unable to recover it. 00:28:33.966 [2024-11-26 19:29:56.752860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.966 [2024-11-26 19:29:56.752891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.966 qpair failed and we were unable to recover it. 
00:28:33.966 [2024-11-26 19:29:56.753008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.966 [2024-11-26 19:29:56.753040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.966 qpair failed and we were unable to recover it. 00:28:33.966 [2024-11-26 19:29:56.753173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.966 [2024-11-26 19:29:56.753205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.966 qpair failed and we were unable to recover it. 00:28:33.966 [2024-11-26 19:29:56.753384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.966 [2024-11-26 19:29:56.753416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.966 qpair failed and we were unable to recover it. 00:28:33.966 [2024-11-26 19:29:56.753625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.966 [2024-11-26 19:29:56.753657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.966 qpair failed and we were unable to recover it. 00:28:33.966 [2024-11-26 19:29:56.753855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.966 [2024-11-26 19:29:56.753893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.966 qpair failed and we were unable to recover it. 00:28:33.966 [2024-11-26 19:29:56.754011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.966 [2024-11-26 19:29:56.754043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.966 qpair failed and we were unable to recover it. 00:28:33.966 [2024-11-26 19:29:56.754321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.966 [2024-11-26 19:29:56.754352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.966 qpair failed and we were unable to recover it. 00:28:33.967 [2024-11-26 19:29:56.754476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.967 [2024-11-26 19:29:56.754507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.967 qpair failed and we were unable to recover it. 00:28:33.967 [2024-11-26 19:29:56.754722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.967 [2024-11-26 19:29:56.754756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.967 qpair failed and we were unable to recover it. 00:28:33.967 [2024-11-26 19:29:56.754877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.967 [2024-11-26 19:29:56.754909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.967 qpair failed and we were unable to recover it. 
00:28:33.967 [2024-11-26 19:29:56.755194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.967 [2024-11-26 19:29:56.755226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.967 qpair failed and we were unable to recover it. 00:28:33.967 [2024-11-26 19:29:56.755360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.967 [2024-11-26 19:29:56.755392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.967 qpair failed and we were unable to recover it. 00:28:33.967 [2024-11-26 19:29:56.755595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.967 [2024-11-26 19:29:56.755628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.967 qpair failed and we were unable to recover it. 00:28:33.967 [2024-11-26 19:29:56.755819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.967 [2024-11-26 19:29:56.755852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.967 qpair failed and we were unable to recover it. 00:28:33.967 [2024-11-26 19:29:56.756042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.967 [2024-11-26 19:29:56.756074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.967 qpair failed and we were unable to recover it. 00:28:33.967 [2024-11-26 19:29:56.756188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.967 [2024-11-26 19:29:56.756220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.967 qpair failed and we were unable to recover it. 00:28:33.967 [2024-11-26 19:29:56.756394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.967 [2024-11-26 19:29:56.756427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.967 qpair failed and we were unable to recover it. 00:28:33.967 [2024-11-26 19:29:56.756556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.968 [2024-11-26 19:29:56.756587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.968 qpair failed and we were unable to recover it. 00:28:33.968 [2024-11-26 19:29:56.756782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.968 [2024-11-26 19:29:56.756816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.968 qpair failed and we were unable to recover it. 00:28:33.968 [2024-11-26 19:29:56.756942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.968 [2024-11-26 19:29:56.756973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.968 qpair failed and we were unable to recover it. 
00:28:33.968 [2024-11-26 19:29:56.757096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.968 [2024-11-26 19:29:56.757128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.969 qpair failed and we were unable to recover it. 00:28:33.969 [2024-11-26 19:29:56.757389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.969 [2024-11-26 19:29:56.757420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.969 qpair failed and we were unable to recover it. 00:28:33.969 [2024-11-26 19:29:56.757624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.969 [2024-11-26 19:29:56.757657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.969 qpair failed and we were unable to recover it. 00:28:33.969 [2024-11-26 19:29:56.757904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.969 [2024-11-26 19:29:56.757937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.969 qpair failed and we were unable to recover it. 00:28:33.969 [2024-11-26 19:29:56.758071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.969 [2024-11-26 19:29:56.758102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.969 qpair failed and we were unable to recover it. 00:28:33.969 [2024-11-26 19:29:56.758224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.969 [2024-11-26 19:29:56.758256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.969 qpair failed and we were unable to recover it. 00:28:33.970 [2024-11-26 19:29:56.758381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.970 [2024-11-26 19:29:56.758414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.970 qpair failed and we were unable to recover it. 00:28:33.970 [2024-11-26 19:29:56.758584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.970 [2024-11-26 19:29:56.758615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.970 qpair failed and we were unable to recover it. 00:28:33.970 [2024-11-26 19:29:56.758792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.970 [2024-11-26 19:29:56.758826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.970 qpair failed and we were unable to recover it. 00:28:33.970 [2024-11-26 19:29:56.759021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.970 [2024-11-26 19:29:56.759052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.971 qpair failed and we were unable to recover it. 
00:28:33.971 [2024-11-26 19:29:56.759264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.971 [2024-11-26 19:29:56.759295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.971 qpair failed and we were unable to recover it. 00:28:33.971 [2024-11-26 19:29:56.759419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.971 [2024-11-26 19:29:56.759452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.971 qpair failed and we were unable to recover it. 00:28:33.971 [2024-11-26 19:29:56.759648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.971 [2024-11-26 19:29:56.759689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.971 qpair failed and we were unable to recover it. 00:28:33.971 [2024-11-26 19:29:56.759952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.971 [2024-11-26 19:29:56.759984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.971 qpair failed and we were unable to recover it. 00:28:33.971 [2024-11-26 19:29:56.760105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.971 [2024-11-26 19:29:56.760137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.971 qpair failed and we were unable to recover it. 00:28:33.971 [2024-11-26 19:29:56.760308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.971 [2024-11-26 19:29:56.760341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.971 qpair failed and we were unable to recover it. 00:28:33.971 [2024-11-26 19:29:56.760525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.971 [2024-11-26 19:29:56.760557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.972 qpair failed and we were unable to recover it. 00:28:33.972 [2024-11-26 19:29:56.760737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.972 [2024-11-26 19:29:56.760769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.972 qpair failed and we were unable to recover it. 00:28:33.972 [2024-11-26 19:29:56.760972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.972 [2024-11-26 19:29:56.761003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.972 qpair failed and we were unable to recover it. 00:28:33.972 [2024-11-26 19:29:56.761134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.972 [2024-11-26 19:29:56.761165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.972 qpair failed and we were unable to recover it. 
00:28:33.972 [2024-11-26 19:29:56.761290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.972 [2024-11-26 19:29:56.761321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.972 qpair failed and we were unable to recover it. 00:28:33.972 [2024-11-26 19:29:56.761441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.972 [2024-11-26 19:29:56.761473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.972 qpair failed and we were unable to recover it. 00:28:33.972 [2024-11-26 19:29:56.761640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.972 [2024-11-26 19:29:56.761679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.972 qpair failed and we were unable to recover it. 00:28:33.973 [2024-11-26 19:29:56.761859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.973 [2024-11-26 19:29:56.761891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.973 qpair failed and we were unable to recover it. 00:28:33.973 [2024-11-26 19:29:56.761997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.973 [2024-11-26 19:29:56.762034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.973 qpair failed and we were unable to recover it. 00:28:33.973 [2024-11-26 19:29:56.762225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.973 [2024-11-26 19:29:56.762258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.973 qpair failed and we were unable to recover it. 00:28:33.973 [2024-11-26 19:29:56.762496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.973 [2024-11-26 19:29:56.762528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.973 qpair failed and we were unable to recover it. 00:28:33.973 [2024-11-26 19:29:56.762759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.973 [2024-11-26 19:29:56.762792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.974 qpair failed and we were unable to recover it. 00:28:33.974 [2024-11-26 19:29:56.762967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.974 [2024-11-26 19:29:56.762999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.974 qpair failed and we were unable to recover it. 00:28:33.974 [2024-11-26 19:29:56.763195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.975 [2024-11-26 19:29:56.763227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.975 qpair failed and we were unable to recover it. 
00:28:33.975 [2024-11-26 19:29:56.763350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.975 [2024-11-26 19:29:56.763381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.975 qpair failed and we were unable to recover it. 00:28:33.975 [2024-11-26 19:29:56.763484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.975 [2024-11-26 19:29:56.763516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.975 qpair failed and we were unable to recover it. 00:28:33.975 [2024-11-26 19:29:56.763708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.975 [2024-11-26 19:29:56.763741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.975 qpair failed and we were unable to recover it. 00:28:33.975 [2024-11-26 19:29:56.763933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.975 [2024-11-26 19:29:56.763965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.975 qpair failed and we were unable to recover it. 00:28:33.975 [2024-11-26 19:29:56.764164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.975 [2024-11-26 19:29:56.764195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.975 qpair failed and we were unable to recover it. 00:28:33.975 [2024-11-26 19:29:56.764370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.975 [2024-11-26 19:29:56.764402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.975 qpair failed and we were unable to recover it. 00:28:33.975 [2024-11-26 19:29:56.764699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.975 [2024-11-26 19:29:56.764733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.975 qpair failed and we were unable to recover it. 00:28:33.975 [2024-11-26 19:29:56.764915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.976 [2024-11-26 19:29:56.764947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.976 qpair failed and we were unable to recover it. 00:28:33.976 [2024-11-26 19:29:56.765072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.976 [2024-11-26 19:29:56.765104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.976 qpair failed and we were unable to recover it. 00:28:33.976 [2024-11-26 19:29:56.765269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.976 [2024-11-26 19:29:56.765301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.976 qpair failed and we were unable to recover it. 
00:28:33.976 [2024-11-26 19:29:56.765407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.976 [2024-11-26 19:29:56.765440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.976 qpair failed and we were unable to recover it. 00:28:33.976 [2024-11-26 19:29:56.765615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.976 [2024-11-26 19:29:56.765647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.976 qpair failed and we were unable to recover it. 00:28:33.976 [2024-11-26 19:29:56.765844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.976 [2024-11-26 19:29:56.765876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.976 qpair failed and we were unable to recover it. 00:28:33.976 [2024-11-26 19:29:56.766076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.976 [2024-11-26 19:29:56.766109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.976 qpair failed and we were unable to recover it. 00:28:33.976 [2024-11-26 19:29:56.766214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.976 [2024-11-26 19:29:56.766246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.976 qpair failed and we were unable to recover it. 00:28:33.976 [2024-11-26 19:29:56.766371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.976 [2024-11-26 19:29:56.766402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.976 qpair failed and we were unable to recover it. 00:28:33.977 [2024-11-26 19:29:56.766576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.977 [2024-11-26 19:29:56.766608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.977 qpair failed and we were unable to recover it. 00:28:33.977 [2024-11-26 19:29:56.766713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.977 [2024-11-26 19:29:56.766744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.977 qpair failed and we were unable to recover it. 00:28:33.977 [2024-11-26 19:29:56.766983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.977 [2024-11-26 19:29:56.767015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.977 qpair failed and we were unable to recover it. 00:28:33.977 [2024-11-26 19:29:56.767198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.977 [2024-11-26 19:29:56.767230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.977 qpair failed and we were unable to recover it. 
00:28:33.977 [2024-11-26 19:29:56.767370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.977 [2024-11-26 19:29:56.767402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.977 qpair failed and we were unable to recover it. 00:28:33.977 [2024-11-26 19:29:56.767610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.977 [2024-11-26 19:29:56.767691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.977 qpair failed and we were unable to recover it. 00:28:33.979 [2024-11-26 19:29:56.767837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.979 [2024-11-26 19:29:56.767873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.979 qpair failed and we were unable to recover it. 00:28:33.979 [2024-11-26 19:29:56.768050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.979 [2024-11-26 19:29:56.768084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.979 qpair failed and we were unable to recover it. 00:28:33.979 [2024-11-26 19:29:56.768262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.979 [2024-11-26 19:29:56.768293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.979 qpair failed and we were unable to recover it. 00:28:33.979 [2024-11-26 19:29:56.768471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.979 [2024-11-26 19:29:56.768503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.979 qpair failed and we were unable to recover it. 00:28:33.979 [2024-11-26 19:29:56.768638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.979 [2024-11-26 19:29:56.768668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.979 qpair failed and we were unable to recover it. 00:28:33.979 [2024-11-26 19:29:56.768870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.979 [2024-11-26 19:29:56.768901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.979 qpair failed and we were unable to recover it. 00:28:33.979 [2024-11-26 19:29:56.769019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.979 [2024-11-26 19:29:56.769051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.979 qpair failed and we were unable to recover it. 00:28:33.979 [2024-11-26 19:29:56.769252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.979 [2024-11-26 19:29:56.769284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.979 qpair failed and we were unable to recover it. 
00:28:33.979 [2024-11-26 19:29:56.769545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.979 [2024-11-26 19:29:56.769576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.979 qpair failed and we were unable to recover it. 00:28:33.979 [2024-11-26 19:29:56.769718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.979 [2024-11-26 19:29:56.769751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.980 qpair failed and we were unable to recover it. 00:28:33.980 [2024-11-26 19:29:56.769868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.980 [2024-11-26 19:29:56.769901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.980 qpair failed and we were unable to recover it. 00:28:33.980 [2024-11-26 19:29:56.770078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.980 [2024-11-26 19:29:56.770111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.980 qpair failed and we were unable to recover it. 00:28:33.980 [2024-11-26 19:29:56.770309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.980 [2024-11-26 19:29:56.770339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.980 qpair failed and we were unable to recover it. 00:28:33.980 [2024-11-26 19:29:56.770468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.980 [2024-11-26 19:29:56.770498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.980 qpair failed and we were unable to recover it. 00:28:33.980 [2024-11-26 19:29:56.770687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.980 [2024-11-26 19:29:56.770720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.980 qpair failed and we were unable to recover it. 00:28:33.980 [2024-11-26 19:29:56.770978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.980 [2024-11-26 19:29:56.771009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.980 qpair failed and we were unable to recover it. 00:28:33.980 [2024-11-26 19:29:56.771196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.980 [2024-11-26 19:29:56.771230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.980 qpair failed and we were unable to recover it. 00:28:33.980 [2024-11-26 19:29:56.771454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.980 [2024-11-26 19:29:56.771486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.980 qpair failed and we were unable to recover it. 
00:28:33.980 [2024-11-26 19:29:56.771609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.980 [2024-11-26 19:29:56.771639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.980 qpair failed and we were unable to recover it. 00:28:33.981 [2024-11-26 19:29:56.771778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.981 [2024-11-26 19:29:56.771810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.981 qpair failed and we were unable to recover it. 00:28:33.981 [2024-11-26 19:29:56.771984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.981 [2024-11-26 19:29:56.772015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.981 qpair failed and we were unable to recover it. 00:28:33.981 [2024-11-26 19:29:56.772130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.981 [2024-11-26 19:29:56.772161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.981 qpair failed and we were unable to recover it. 00:28:33.981 [2024-11-26 19:29:56.772425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.981 [2024-11-26 19:29:56.772458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.981 qpair failed and we were unable to recover it. 00:28:33.981 [2024-11-26 19:29:56.772576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.981 [2024-11-26 19:29:56.772605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.981 qpair failed and we were unable to recover it. 00:28:33.981 [2024-11-26 19:29:56.772737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.981 [2024-11-26 19:29:56.772770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.981 qpair failed and we were unable to recover it. 00:28:33.981 [2024-11-26 19:29:56.772902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.981 [2024-11-26 19:29:56.772933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.981 qpair failed and we were unable to recover it. 00:28:33.981 [2024-11-26 19:29:56.773056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.982 [2024-11-26 19:29:56.773091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.982 qpair failed and we were unable to recover it. 00:28:33.982 [2024-11-26 19:29:56.773272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.982 [2024-11-26 19:29:56.773303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.982 qpair failed and we were unable to recover it. 
00:28:33.982 [2024-11-26 19:29:56.773469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.982 [2024-11-26 19:29:56.773501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.982 qpair failed and we were unable to recover it. 00:28:33.982 [2024-11-26 19:29:56.773629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.982 [2024-11-26 19:29:56.773659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.982 qpair failed and we were unable to recover it. 00:28:33.982 [2024-11-26 19:29:56.773796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.982 [2024-11-26 19:29:56.773827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.982 qpair failed and we were unable to recover it. 00:28:33.982 [2024-11-26 19:29:56.774009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.982 [2024-11-26 19:29:56.774040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.982 qpair failed and we were unable to recover it. 00:28:33.982 [2024-11-26 19:29:56.774228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.982 [2024-11-26 19:29:56.774260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.982 qpair failed and we were unable to recover it. 00:28:33.982 [2024-11-26 19:29:56.774396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.982 [2024-11-26 19:29:56.774426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.982 qpair failed and we were unable to recover it. 00:28:33.982 [2024-11-26 19:29:56.774555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.982 [2024-11-26 19:29:56.774587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.982 qpair failed and we were unable to recover it. 00:28:33.982 [2024-11-26 19:29:56.774703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.984 [2024-11-26 19:29:56.774735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.984 qpair failed and we were unable to recover it. 00:28:33.985 [2024-11-26 19:29:56.774921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.985 [2024-11-26 19:29:56.774953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.985 qpair failed and we were unable to recover it. 00:28:33.985 [2024-11-26 19:29:56.775057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.985 [2024-11-26 19:29:56.775087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.985 qpair failed and we were unable to recover it. 
00:28:33.985 [2024-11-26 19:29:56.775205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.985 [2024-11-26 19:29:56.775234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.985 qpair failed and we were unable to recover it. 00:28:33.985 [2024-11-26 19:29:56.775353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.985 [2024-11-26 19:29:56.775383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.985 qpair failed and we were unable to recover it. 00:28:33.985 [2024-11-26 19:29:56.775651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.985 [2024-11-26 19:29:56.775690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.985 qpair failed and we were unable to recover it. 00:28:33.985 [2024-11-26 19:29:56.775882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.985 [2024-11-26 19:29:56.775913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.985 qpair failed and we were unable to recover it. 00:28:33.985 [2024-11-26 19:29:56.776034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.985 [2024-11-26 19:29:56.776066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.985 qpair failed and we were unable to recover it. 00:28:33.986 [2024-11-26 19:29:56.776234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.986 [2024-11-26 19:29:56.776265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.986 qpair failed and we were unable to recover it. 00:28:33.986 [2024-11-26 19:29:56.776523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.986 [2024-11-26 19:29:56.776553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.986 qpair failed and we were unable to recover it. 00:28:33.986 [2024-11-26 19:29:56.776722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.986 [2024-11-26 19:29:56.776756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.986 qpair failed and we were unable to recover it. 00:28:33.986 [2024-11-26 19:29:56.777041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.986 [2024-11-26 19:29:56.777073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.986 qpair failed and we were unable to recover it. 00:28:33.986 [2024-11-26 19:29:56.777251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.986 [2024-11-26 19:29:56.777281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.986 qpair failed and we were unable to recover it. 
00:28:33.986 [2024-11-26 19:29:56.777571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.986 [2024-11-26 19:29:56.777604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.986 qpair failed and we were unable to recover it. 00:28:33.986 [2024-11-26 19:29:56.777735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.986 [2024-11-26 19:29:56.777768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.986 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.777881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.777913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.778117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.778148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.778267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.778297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.778560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.778597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.778778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.778810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.778923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.778954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.779125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.779156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.779418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.779449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 
00:28:33.987 [2024-11-26 19:29:56.779631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.779662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.779786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.779817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.780031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.780064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.780252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.780283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.780396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.780428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.780713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.780745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.780887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.780917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.781106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.781136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.781255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.781287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.781538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.781568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 
00:28:33.987 [2024-11-26 19:29:56.781680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.781713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.781829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.781860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.781967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.987 [2024-11-26 19:29:56.781998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.987 qpair failed and we were unable to recover it. 00:28:33.987 [2024-11-26 19:29:56.782237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.782268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.782463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.782496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.782628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.782658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.782843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.782873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.782994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.783027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.783200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.783231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.783357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.783387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 
00:28:33.988 [2024-11-26 19:29:56.783580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.783612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.783748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.783780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.783895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.783933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.784170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.784201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.784395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.784427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.784663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.784707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.784885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.784916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.785093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.785123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.785297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.785329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.785519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.785550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 
00:28:33.988 [2024-11-26 19:29:56.785724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.785757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.785887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.785919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.786020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.786051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.786247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.786278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.786413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.786445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.786619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.786658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.786800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.786832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.787022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.787053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.787170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.787201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.787371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.787403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 
00:28:33.988 [2024-11-26 19:29:56.787573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.787605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.787859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.787891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.788135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.788165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.788348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.788379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.788482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.788514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.788636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.788666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.788798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.788829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.788944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.788975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.988 qpair failed and we were unable to recover it. 00:28:33.988 [2024-11-26 19:29:56.789156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.988 [2024-11-26 19:29:56.789188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.789291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.789322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 
00:28:33.989 [2024-11-26 19:29:56.789529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.789560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.789741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.789775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.789942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.789974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.790149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.790181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.790449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.790481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.790684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.790716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.790919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.790950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.791063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.791094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.791202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.791233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.791467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.791499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 
00:28:33.989 [2024-11-26 19:29:56.791761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.791794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.791921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.791953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.792147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.792177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.792227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c57b20 (9): Bad file descriptor 00:28:33.989 [2024-11-26 19:29:56.792547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.792617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.792783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.792824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.793010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.793041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.793281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.793312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.793427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.793461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.793646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.793689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 
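The repeated failures above all boil down to two POSIX errno values: 111 from the connect() call in posix_sock_create, and a single 9 from the flush in nvme_tcp_qpair_process_completions. On Linux these are ECONNREFUSED (nothing was accepting on 10.0.0.2:4420 at that moment) and EBADF (the socket descriptor had already been torn down). The standalone C sketch below is illustrative only and is not part of the SPDK test or its harness; it reproduces both errno values with plain POSIX calls, using 127.0.0.1 in place of the test's 10.0.0.2 and assuming nothing is listening on that port locally.

/*
 * Illustrative sketch, not SPDK code: reproduce errno 111 (ECONNREFUSED)
 * and errno 9 (EBADF) as reported in the log above.
 * Assumes no listener on 127.0.0.1:4420.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);   /* same NVMe/TCP port the test targets */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* Case 1: no listener on the port -> connect() fails, errno = 111 */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    /* Case 2: reuse a descriptor after close() -> errno = 9 (Bad file descriptor) */
    close(fd);
    if (write(fd, "x", 1) < 0)
        printf("write() failed, errno = %d (%s)\n", errno, strerror(errno));

    return 0;
}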
00:28:33.989 [2024-11-26 19:29:56.793867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.793900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.794011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.794043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.794171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.794204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.794446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.794478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.794699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.794731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.794905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.794938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.795075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.795107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.795448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.795518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.795781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.795818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.795950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.795982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 
00:28:33.989 [2024-11-26 19:29:56.796159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.796190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.796394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.796425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.796563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.796593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.796714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.796747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.796961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.796993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.797182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.797213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.797352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.797383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.797585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.797616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.797811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.797844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.798028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.798059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 
00:28:33.989 [2024-11-26 19:29:56.798234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.798274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.798388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.798420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.798615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.798645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.798834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.798867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.798991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.799021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.799196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.799227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.799355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.799386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.799583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.799614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.799883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.799915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.989 qpair failed and we were unable to recover it. 00:28:33.989 [2024-11-26 19:29:56.800206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.989 [2024-11-26 19:29:56.800237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 
00:28:33.990 [2024-11-26 19:29:56.800370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.800402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.800533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.800563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.800761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.800795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.800919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.800951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.801138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.801170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.801389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.801422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.801541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.801573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.801755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.801787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.802007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.802038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.802152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.802184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 
00:28:33.990 [2024-11-26 19:29:56.802418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.802449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.802706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.802737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.802878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.802911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.803035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.803066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.803346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.803376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.803495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.803527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.803714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.803746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.803864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.803896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.804002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.804033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.804276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.804306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 
00:28:33.990 [2024-11-26 19:29:56.804498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.804530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.804725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.804756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.804877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.804908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.805089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.805120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.805328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.805358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.805531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.805563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.805750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.805783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.805981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.806013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.806132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.806162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.806348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.806378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 
00:28:33.990 [2024-11-26 19:29:56.806591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.806628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.806824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.806857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.807032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.807062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.807241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.807273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.807403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.807434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.807646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.807685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.807809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.807840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.808040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.990 [2024-11-26 19:29:56.808070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.990 qpair failed and we were unable to recover it. 00:28:33.990 [2024-11-26 19:29:56.808186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.808217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.808337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.808368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 
00:28:33.991 [2024-11-26 19:29:56.808557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.808589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.808861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.808894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.809088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.809119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.809326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.809357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.809564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.809596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.809818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.809850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.810095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.810127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.810245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.810276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.810452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.810483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.810687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.810721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 
00:28:33.991 [2024-11-26 19:29:56.810835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.810866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.811000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.811031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.811270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.811301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.811551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.811581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.811701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.811733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.811928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.811960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.812082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.812112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.812301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.812333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.812589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.812620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.812749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.812780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 
00:28:33.991 [2024-11-26 19:29:56.812975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.813006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.813184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.813215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.813389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.813419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.813602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.813634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.813831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.813863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.814058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.814089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.814350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.814381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.814564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.814595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.814715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.814755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.814894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.814927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 
00:28:33.991 [2024-11-26 19:29:56.815164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.815206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.815320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.815351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.815533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.815565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.815689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.815722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.815900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.815931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.816123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.816153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.816401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.816431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.816562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.816594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.816802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.816834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.816959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.816990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 
00:28:33.991 [2024-11-26 19:29:56.817110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.817141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.817268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.817300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.817510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.817541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.991 qpair failed and we were unable to recover it. 00:28:33.991 [2024-11-26 19:29:56.817732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.991 [2024-11-26 19:29:56.817764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.992 qpair failed and we were unable to recover it. 00:28:33.992 [2024-11-26 19:29:56.817883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.992 [2024-11-26 19:29:56.817915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.992 qpair failed and we were unable to recover it. 00:28:33.992 [2024-11-26 19:29:56.818157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.992 [2024-11-26 19:29:56.818189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.992 qpair failed and we were unable to recover it. 00:28:33.992 [2024-11-26 19:29:56.818437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.992 [2024-11-26 19:29:56.818468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.992 qpair failed and we were unable to recover it. 00:28:33.992 [2024-11-26 19:29:56.818663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.992 [2024-11-26 19:29:56.818705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.992 qpair failed and we were unable to recover it. 00:28:33.992 [2024-11-26 19:29:56.818839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.992 [2024-11-26 19:29:56.818870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.992 qpair failed and we were unable to recover it. 00:28:33.992 [2024-11-26 19:29:56.819130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.992 [2024-11-26 19:29:56.819162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.992 qpair failed and we were unable to recover it. 
00:28:33.992 [2024-11-26 19:29:56.819286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.992 [2024-11-26 19:29:56.819316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.992 qpair failed and we were unable to recover it. 00:28:33.992 [2024-11-26 19:29:56.819421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.992 [2024-11-26 19:29:56.819453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.992 qpair failed and we were unable to recover it. 00:28:33.992 [2024-11-26 19:29:56.819579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.992 [2024-11-26 19:29:56.819611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.992 qpair failed and we were unable to recover it. 00:28:33.992 [2024-11-26 19:29:56.819811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.992 [2024-11-26 19:29:56.819842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.992 qpair failed and we were unable to recover it. 00:28:33.992 [2024-11-26 19:29:56.820104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.992 [2024-11-26 19:29:56.820135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.992 qpair failed and we were unable to recover it. 00:28:33.992 [2024-11-26 19:29:56.820327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.992 [2024-11-26 19:29:56.820358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.992 qpair failed and we were unable to recover it. 00:28:33.992 [2024-11-26 19:29:56.820543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.992 [2024-11-26 19:29:56.820575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.992 qpair failed and we were unable to recover it. 00:28:33.992 [2024-11-26 19:29:56.820709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.992 [2024-11-26 19:29:56.820742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.992 qpair failed and we were unable to recover it. 00:28:33.992 [2024-11-26 19:29:56.820978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.992 [2024-11-26 19:29:56.821010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.992 qpair failed and we were unable to recover it. 00:28:33.992 [2024-11-26 19:29:56.821196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.992 [2024-11-26 19:29:56.821227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:33.992 qpair failed and we were unable to recover it. 
00:28:33.992 [2024-11-26 19:29:56.821342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.992 [2024-11-26 19:29:56.821373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:33.992 qpair failed and we were unable to recover it.
00:28:33.992 [2024-11-26 19:29:56.821488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.992 [2024-11-26 19:29:56.821519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:33.992 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7f8314000b90 through 2024-11-26 19:29:56.849292 ...]
00:28:33.996 [2024-11-26 19:29:56.849412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.996 [2024-11-26 19:29:56.849445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:33.996 qpair failed and we were unable to recover it.
00:28:33.996 [2024-11-26 19:29:56.849612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.996 [2024-11-26 19:29:56.849642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:33.996 qpair failed and we were unable to recover it.
00:28:33.996 [2024-11-26 19:29:56.849982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.996 [2024-11-26 19:29:56.850053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420
00:28:33.996 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7f8318000b90 through 2024-11-26 19:29:56.863848 ...]
00:28:33.999 [2024-11-26 19:29:56.863978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.999 [2024-11-26 19:29:56.864009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.000 qpair failed and we were unable to recover it. 00:28:34.000 [2024-11-26 19:29:56.864201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.000 [2024-11-26 19:29:56.864234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.000 qpair failed and we were unable to recover it. 00:28:34.000 [2024-11-26 19:29:56.864443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.000 [2024-11-26 19:29:56.864473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.000 qpair failed and we were unable to recover it. 00:28:34.000 [2024-11-26 19:29:56.864734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.000 [2024-11-26 19:29:56.864767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.000 qpair failed and we were unable to recover it. 00:28:34.000 [2024-11-26 19:29:56.864959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.000 [2024-11-26 19:29:56.864990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.000 qpair failed and we were unable to recover it. 00:28:34.000 [2024-11-26 19:29:56.865091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.000 [2024-11-26 19:29:56.865122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.000 qpair failed and we were unable to recover it. 00:28:34.000 [2024-11-26 19:29:56.865296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.000 [2024-11-26 19:29:56.865327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.000 qpair failed and we were unable to recover it. 00:28:34.000 [2024-11-26 19:29:56.865500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.000 [2024-11-26 19:29:56.865532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.000 qpair failed and we were unable to recover it. 00:28:34.000 [2024-11-26 19:29:56.865636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.000 [2024-11-26 19:29:56.865668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.000 qpair failed and we were unable to recover it. 00:28:34.000 [2024-11-26 19:29:56.865895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.000 [2024-11-26 19:29:56.865927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.000 qpair failed and we were unable to recover it. 
00:28:34.000 [2024-11-26 19:29:56.866041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.000 [2024-11-26 19:29:56.866074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.000 qpair failed and we were unable to recover it. 00:28:34.000 [2024-11-26 19:29:56.866197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.000 [2024-11-26 19:29:56.866227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.000 qpair failed and we were unable to recover it. 00:28:34.000 [2024-11-26 19:29:56.866406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.000 [2024-11-26 19:29:56.866438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.000 qpair failed and we were unable to recover it. 00:28:34.000 [2024-11-26 19:29:56.866683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.000 [2024-11-26 19:29:56.866730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.866946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.866977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.867163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.867194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.867381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.867413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.867688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.867721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.867960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.867992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.868114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.868145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 
00:28:34.001 [2024-11-26 19:29:56.868335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.868366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.868535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.868565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.868747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.868780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.869017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.869048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.869152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.869183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.869364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.869395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.869574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.869605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.869746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.869784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.869955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.869987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.870223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.870254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 
00:28:34.001 [2024-11-26 19:29:56.870490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.870522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.870630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.870662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.870857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.870889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.871002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.871033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.871138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.871168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.871290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.871322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.871497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.871528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.871696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.871730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.871906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.871937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.872071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.872102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 
00:28:34.001 [2024-11-26 19:29:56.872286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.872317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.872509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.872542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.872778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.872812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.872926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.872957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.873158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.873190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.873363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.873395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.001 [2024-11-26 19:29:56.873595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.001 [2024-11-26 19:29:56.873627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.001 qpair failed and we were unable to recover it. 00:28:34.002 [2024-11-26 19:29:56.873840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.002 [2024-11-26 19:29:56.873873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.002 qpair failed and we were unable to recover it. 00:28:34.002 [2024-11-26 19:29:56.874042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.002 [2024-11-26 19:29:56.874072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.002 qpair failed and we were unable to recover it. 00:28:34.002 [2024-11-26 19:29:56.874173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.002 [2024-11-26 19:29:56.874205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.002 qpair failed and we were unable to recover it. 
00:28:34.002 [2024-11-26 19:29:56.874367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.002 [2024-11-26 19:29:56.874399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.002 qpair failed and we were unable to recover it. 00:28:34.002 [2024-11-26 19:29:56.874659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.002 [2024-11-26 19:29:56.874701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.002 qpair failed and we were unable to recover it. 00:28:34.002 [2024-11-26 19:29:56.874963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.002 [2024-11-26 19:29:56.874994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.002 qpair failed and we were unable to recover it. 00:28:34.002 [2024-11-26 19:29:56.875175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.002 [2024-11-26 19:29:56.875207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.002 qpair failed and we were unable to recover it. 00:28:34.002 [2024-11-26 19:29:56.875335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.002 [2024-11-26 19:29:56.875367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.002 qpair failed and we were unable to recover it. 00:28:34.002 [2024-11-26 19:29:56.875562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.002 [2024-11-26 19:29:56.875595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.002 qpair failed and we were unable to recover it. 00:28:34.002 [2024-11-26 19:29:56.875769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.002 [2024-11-26 19:29:56.875802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.002 qpair failed and we were unable to recover it. 00:28:34.002 [2024-11-26 19:29:56.875991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.002 [2024-11-26 19:29:56.876023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.002 qpair failed and we were unable to recover it. 00:28:34.002 [2024-11-26 19:29:56.876198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.002 [2024-11-26 19:29:56.876230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.002 qpair failed and we were unable to recover it. 00:28:34.002 [2024-11-26 19:29:56.876352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.002 [2024-11-26 19:29:56.876383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.002 qpair failed and we were unable to recover it. 
00:28:34.002 [2024-11-26 19:29:56.876667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.002 [2024-11-26 19:29:56.876711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.002 qpair failed and we were unable to recover it. 00:28:34.002 [2024-11-26 19:29:56.876883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.003 [2024-11-26 19:29:56.876914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.003 qpair failed and we were unable to recover it. 00:28:34.003 [2024-11-26 19:29:56.877023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.003 [2024-11-26 19:29:56.877054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.003 qpair failed and we were unable to recover it. 00:28:34.003 [2024-11-26 19:29:56.877230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.003 [2024-11-26 19:29:56.877262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.003 qpair failed and we were unable to recover it. 00:28:34.003 [2024-11-26 19:29:56.877391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.003 [2024-11-26 19:29:56.877423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.003 qpair failed and we were unable to recover it. 00:28:34.003 [2024-11-26 19:29:56.877632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.003 [2024-11-26 19:29:56.877662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.003 qpair failed and we were unable to recover it. 00:28:34.003 [2024-11-26 19:29:56.877864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.003 [2024-11-26 19:29:56.877895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.003 qpair failed and we were unable to recover it. 00:28:34.003 [2024-11-26 19:29:56.878011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.003 [2024-11-26 19:29:56.878048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.003 qpair failed and we were unable to recover it. 00:28:34.003 [2024-11-26 19:29:56.878252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.003 [2024-11-26 19:29:56.878283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.003 qpair failed and we were unable to recover it. 00:28:34.003 [2024-11-26 19:29:56.878477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.003 [2024-11-26 19:29:56.878508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.003 qpair failed and we were unable to recover it. 
00:28:34.003 [2024-11-26 19:29:56.878617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.003 [2024-11-26 19:29:56.878650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.003 qpair failed and we were unable to recover it. 00:28:34.003 [2024-11-26 19:29:56.878841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.003 [2024-11-26 19:29:56.878872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.003 qpair failed and we were unable to recover it. 00:28:34.003 [2024-11-26 19:29:56.879039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.003 [2024-11-26 19:29:56.879071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.003 qpair failed and we were unable to recover it. 00:28:34.003 [2024-11-26 19:29:56.879245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.003 [2024-11-26 19:29:56.879278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.003 qpair failed and we were unable to recover it. 00:28:34.003 [2024-11-26 19:29:56.879386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.003 [2024-11-26 19:29:56.879417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.003 qpair failed and we were unable to recover it. 00:28:34.003 [2024-11-26 19:29:56.879687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.003 [2024-11-26 19:29:56.879721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.003 qpair failed and we were unable to recover it. 00:28:34.003 [2024-11-26 19:29:56.879915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.003 [2024-11-26 19:29:56.879946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.003 qpair failed and we were unable to recover it. 00:28:34.003 [2024-11-26 19:29:56.880132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.003 [2024-11-26 19:29:56.880163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.003 qpair failed and we were unable to recover it. 00:28:34.003 [2024-11-26 19:29:56.880333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.003 [2024-11-26 19:29:56.880364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.003 qpair failed and we were unable to recover it. 00:28:34.003 [2024-11-26 19:29:56.880622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.003 [2024-11-26 19:29:56.880653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.003 qpair failed and we were unable to recover it. 
00:28:34.003 [2024-11-26 19:29:56.880787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.004 [2024-11-26 19:29:56.880819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.004 qpair failed and we were unable to recover it. 00:28:34.004 [2024-11-26 19:29:56.881011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.004 [2024-11-26 19:29:56.881041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.004 qpair failed and we were unable to recover it. 00:28:34.004 [2024-11-26 19:29:56.881222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.004 [2024-11-26 19:29:56.881254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.004 qpair failed and we were unable to recover it. 00:28:34.004 [2024-11-26 19:29:56.881441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.004 [2024-11-26 19:29:56.881472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.004 qpair failed and we were unable to recover it. 00:28:34.004 [2024-11-26 19:29:56.881645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.004 [2024-11-26 19:29:56.881687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.004 qpair failed and we were unable to recover it. 00:28:34.004 [2024-11-26 19:29:56.881970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.004 [2024-11-26 19:29:56.882002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.004 qpair failed and we were unable to recover it. 00:28:34.004 [2024-11-26 19:29:56.882110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.004 [2024-11-26 19:29:56.882140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.004 qpair failed and we were unable to recover it. 00:28:34.004 [2024-11-26 19:29:56.882267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.004 [2024-11-26 19:29:56.882298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.004 qpair failed and we were unable to recover it. 00:28:34.004 [2024-11-26 19:29:56.882411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.004 [2024-11-26 19:29:56.882443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.004 qpair failed and we were unable to recover it. 00:28:34.004 [2024-11-26 19:29:56.882632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.004 [2024-11-26 19:29:56.882663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.004 qpair failed and we were unable to recover it. 
00:28:34.004 [2024-11-26 19:29:56.882782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.004 [2024-11-26 19:29:56.882814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.004 qpair failed and we were unable to recover it. 00:28:34.004 [2024-11-26 19:29:56.882987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.004 [2024-11-26 19:29:56.883017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.004 qpair failed and we were unable to recover it. 00:28:34.004 [2024-11-26 19:29:56.883120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.004 [2024-11-26 19:29:56.883152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.004 qpair failed and we were unable to recover it. 00:28:34.004 [2024-11-26 19:29:56.883269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.004 [2024-11-26 19:29:56.883300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.004 qpair failed and we were unable to recover it. 00:28:34.004 [2024-11-26 19:29:56.883501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.004 [2024-11-26 19:29:56.883533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.004 qpair failed and we were unable to recover it. 00:28:34.004 [2024-11-26 19:29:56.883703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.004 [2024-11-26 19:29:56.883735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.004 qpair failed and we were unable to recover it. 00:28:34.004 [2024-11-26 19:29:56.883968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.004 [2024-11-26 19:29:56.883999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.004 qpair failed and we were unable to recover it. 00:28:34.004 [2024-11-26 19:29:56.884233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.005 [2024-11-26 19:29:56.884265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.005 qpair failed and we were unable to recover it. 00:28:34.005 [2024-11-26 19:29:56.884371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.005 [2024-11-26 19:29:56.884402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.005 qpair failed and we were unable to recover it. 00:28:34.005 [2024-11-26 19:29:56.884529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.005 [2024-11-26 19:29:56.884560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.005 qpair failed and we were unable to recover it. 
00:28:34.005 [2024-11-26 19:29:56.884821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.005 [2024-11-26 19:29:56.884854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.005 qpair failed and we were unable to recover it. 00:28:34.005 [2024-11-26 19:29:56.885034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.005 [2024-11-26 19:29:56.885064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.005 qpair failed and we were unable to recover it. 00:28:34.005 [2024-11-26 19:29:56.885321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.005 [2024-11-26 19:29:56.885353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.005 qpair failed and we were unable to recover it. 00:28:34.005 [2024-11-26 19:29:56.885537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.005 [2024-11-26 19:29:56.885567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.005 qpair failed and we were unable to recover it. 00:28:34.005 [2024-11-26 19:29:56.885714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.005 [2024-11-26 19:29:56.885748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.005 qpair failed and we were unable to recover it. 00:28:34.005 [2024-11-26 19:29:56.885942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.005 [2024-11-26 19:29:56.885974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.005 qpair failed and we were unable to recover it. 00:28:34.005 [2024-11-26 19:29:56.886081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.005 [2024-11-26 19:29:56.886112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.005 qpair failed and we were unable to recover it. 00:28:34.005 [2024-11-26 19:29:56.886315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.005 [2024-11-26 19:29:56.886351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.005 qpair failed and we were unable to recover it. 00:28:34.005 [2024-11-26 19:29:56.886614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.005 [2024-11-26 19:29:56.886646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.005 qpair failed and we were unable to recover it. 00:28:34.005 [2024-11-26 19:29:56.886825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.005 [2024-11-26 19:29:56.886856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.005 qpair failed and we were unable to recover it. 
00:28:34.005 [2024-11-26 19:29:56.886979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.005 [2024-11-26 19:29:56.887010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.005 qpair failed and we were unable to recover it. 00:28:34.006 [2024-11-26 19:29:56.887144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.006 [2024-11-26 19:29:56.887176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.006 qpair failed and we were unable to recover it. 00:28:34.006 [2024-11-26 19:29:56.887413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.006 [2024-11-26 19:29:56.887444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.006 qpair failed and we were unable to recover it. 00:28:34.006 [2024-11-26 19:29:56.887642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.006 [2024-11-26 19:29:56.887683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.006 qpair failed and we were unable to recover it. 00:28:34.006 [2024-11-26 19:29:56.887872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.006 [2024-11-26 19:29:56.887905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.006 qpair failed and we were unable to recover it. 00:28:34.006 [2024-11-26 19:29:56.888150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.006 [2024-11-26 19:29:56.888180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.006 qpair failed and we were unable to recover it. 00:28:34.006 [2024-11-26 19:29:56.888403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.006 [2024-11-26 19:29:56.888435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.006 qpair failed and we were unable to recover it. 00:28:34.006 [2024-11-26 19:29:56.888561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.006 [2024-11-26 19:29:56.888591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.006 qpair failed and we were unable to recover it. 00:28:34.006 [2024-11-26 19:29:56.888894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.006 [2024-11-26 19:29:56.888927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.006 qpair failed and we were unable to recover it. 00:28:34.006 [2024-11-26 19:29:56.889122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.006 [2024-11-26 19:29:56.889153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.006 qpair failed and we were unable to recover it. 
00:28:34.006 [2024-11-26 19:29:56.889405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.006 [2024-11-26 19:29:56.889437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.006 qpair failed and we were unable to recover it. 00:28:34.006 [2024-11-26 19:29:56.889561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.006 [2024-11-26 19:29:56.889592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.006 qpair failed and we were unable to recover it. 00:28:34.007 [2024-11-26 19:29:56.889709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.007 [2024-11-26 19:29:56.889742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.007 qpair failed and we were unable to recover it. 00:28:34.007 [2024-11-26 19:29:56.889849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.007 [2024-11-26 19:29:56.889879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.007 qpair failed and we were unable to recover it. 00:28:34.007 [2024-11-26 19:29:56.890016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.007 [2024-11-26 19:29:56.890048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.007 qpair failed and we were unable to recover it. 00:28:34.007 [2024-11-26 19:29:56.890159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.007 [2024-11-26 19:29:56.890190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.007 qpair failed and we were unable to recover it. 00:28:34.007 [2024-11-26 19:29:56.890386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.007 [2024-11-26 19:29:56.890417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.007 qpair failed and we were unable to recover it. 00:28:34.007 [2024-11-26 19:29:56.890529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.007 [2024-11-26 19:29:56.890561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.007 qpair failed and we were unable to recover it. 00:28:34.007 [2024-11-26 19:29:56.890798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.007 [2024-11-26 19:29:56.890831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.007 qpair failed and we were unable to recover it. 00:28:34.007 [2024-11-26 19:29:56.891066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.007 [2024-11-26 19:29:56.891097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.007 qpair failed and we were unable to recover it. 
00:28:34.007 [2024-11-26 19:29:56.891213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.007 [2024-11-26 19:29:56.891245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.007 qpair failed and we were unable to recover it. 00:28:34.007 [2024-11-26 19:29:56.891480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.007 [2024-11-26 19:29:56.891511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.007 qpair failed and we were unable to recover it. 00:28:34.007 [2024-11-26 19:29:56.891621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.007 [2024-11-26 19:29:56.891652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.007 qpair failed and we were unable to recover it. 00:28:34.007 [2024-11-26 19:29:56.891846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.007 [2024-11-26 19:29:56.891878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.007 qpair failed and we were unable to recover it. 00:28:34.007 [2024-11-26 19:29:56.892071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.007 [2024-11-26 19:29:56.892103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.007 qpair failed and we were unable to recover it. 00:28:34.007 [2024-11-26 19:29:56.892311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.007 [2024-11-26 19:29:56.892342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.007 qpair failed and we were unable to recover it. 00:28:34.007 [2024-11-26 19:29:56.892579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.007 [2024-11-26 19:29:56.892610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.007 qpair failed and we were unable to recover it. 00:28:34.007 [2024-11-26 19:29:56.892812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.008 [2024-11-26 19:29:56.892846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.008 qpair failed and we were unable to recover it. 00:28:34.008 [2024-11-26 19:29:56.892963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.008 [2024-11-26 19:29:56.892994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.008 qpair failed and we were unable to recover it. 00:28:34.008 [2024-11-26 19:29:56.893167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.008 [2024-11-26 19:29:56.893199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.008 qpair failed and we were unable to recover it. 
00:28:34.008 [2024-11-26 19:29:56.893326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.008 [2024-11-26 19:29:56.893358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.008 qpair failed and we were unable to recover it. 00:28:34.008 [2024-11-26 19:29:56.893467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.008 [2024-11-26 19:29:56.893498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.008 qpair failed and we were unable to recover it. 00:28:34.008 [2024-11-26 19:29:56.893616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.008 [2024-11-26 19:29:56.893648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.008 qpair failed and we were unable to recover it. 00:28:34.008 [2024-11-26 19:29:56.893773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.008 [2024-11-26 19:29:56.893804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.008 qpair failed and we were unable to recover it. 00:28:34.008 [2024-11-26 19:29:56.893998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.008 [2024-11-26 19:29:56.894029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.008 qpair failed and we were unable to recover it. 00:28:34.008 [2024-11-26 19:29:56.894290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.008 [2024-11-26 19:29:56.894321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.008 qpair failed and we were unable to recover it. 00:28:34.008 [2024-11-26 19:29:56.894437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.008 [2024-11-26 19:29:56.894468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.008 qpair failed and we were unable to recover it. 00:28:34.008 [2024-11-26 19:29:56.894655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.008 [2024-11-26 19:29:56.894701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.008 qpair failed and we were unable to recover it. 00:28:34.008 [2024-11-26 19:29:56.894894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.008 [2024-11-26 19:29:56.894926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.008 qpair failed and we were unable to recover it. 00:28:34.008 [2024-11-26 19:29:56.895100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.008 [2024-11-26 19:29:56.895130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.008 qpair failed and we were unable to recover it. 
00:28:34.009 [2024-11-26 19:29:56.895241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.009 [2024-11-26 19:29:56.895272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.009 qpair failed and we were unable to recover it. 00:28:34.009 [2024-11-26 19:29:56.895461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.009 [2024-11-26 19:29:56.895491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.009 qpair failed and we were unable to recover it. 00:28:34.009 [2024-11-26 19:29:56.895656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.009 [2024-11-26 19:29:56.895697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.009 qpair failed and we were unable to recover it. 00:28:34.009 [2024-11-26 19:29:56.895885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.009 [2024-11-26 19:29:56.895917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.009 qpair failed and we were unable to recover it. 00:28:34.009 [2024-11-26 19:29:56.896040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.009 [2024-11-26 19:29:56.896071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.009 qpair failed and we were unable to recover it. 00:28:34.009 [2024-11-26 19:29:56.896197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.009 [2024-11-26 19:29:56.896227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.009 qpair failed and we were unable to recover it. 00:28:34.009 [2024-11-26 19:29:56.896396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.009 [2024-11-26 19:29:56.896427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.009 qpair failed and we were unable to recover it. 00:28:34.009 [2024-11-26 19:29:56.896600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.009 [2024-11-26 19:29:56.896631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.009 qpair failed and we were unable to recover it. 00:28:34.009 [2024-11-26 19:29:56.896894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.009 [2024-11-26 19:29:56.896925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.009 qpair failed and we were unable to recover it. 00:28:34.009 [2024-11-26 19:29:56.897196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.009 [2024-11-26 19:29:56.897227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.009 qpair failed and we were unable to recover it. 
00:28:34.009 [2024-11-26 19:29:56.897443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.009 [2024-11-26 19:29:56.897474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.009 qpair failed and we were unable to recover it. 00:28:34.009 [2024-11-26 19:29:56.897744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.009 [2024-11-26 19:29:56.897778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.009 qpair failed and we were unable to recover it. 00:28:34.010 [2024-11-26 19:29:56.897969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.010 [2024-11-26 19:29:56.898001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.010 qpair failed and we were unable to recover it. 00:28:34.010 [2024-11-26 19:29:56.898115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.010 [2024-11-26 19:29:56.898146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.010 qpair failed and we were unable to recover it. 00:28:34.010 [2024-11-26 19:29:56.898257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.010 [2024-11-26 19:29:56.898288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.010 qpair failed and we were unable to recover it. 00:28:34.010 [2024-11-26 19:29:56.898471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.010 [2024-11-26 19:29:56.898502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.010 qpair failed and we were unable to recover it. 00:28:34.010 [2024-11-26 19:29:56.898691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.010 [2024-11-26 19:29:56.898724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.010 qpair failed and we were unable to recover it. 00:28:34.010 [2024-11-26 19:29:56.898983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.010 [2024-11-26 19:29:56.899015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.010 qpair failed and we were unable to recover it. 00:28:34.010 [2024-11-26 19:29:56.899148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.010 [2024-11-26 19:29:56.899179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.010 qpair failed and we were unable to recover it. 00:28:34.010 [2024-11-26 19:29:56.899404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.010 [2024-11-26 19:29:56.899436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.010 qpair failed and we were unable to recover it. 
00:28:34.010 [2024-11-26 19:29:56.899685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.010 [2024-11-26 19:29:56.899717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.010 qpair failed and we were unable to recover it. 00:28:34.010 [2024-11-26 19:29:56.899908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.010 [2024-11-26 19:29:56.899939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.010 qpair failed and we were unable to recover it. 00:28:34.010 [2024-11-26 19:29:56.900185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.010 [2024-11-26 19:29:56.900217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.011 qpair failed and we were unable to recover it. 00:28:34.011 [2024-11-26 19:29:56.900401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.011 [2024-11-26 19:29:56.900432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.011 qpair failed and we were unable to recover it. 00:28:34.011 [2024-11-26 19:29:56.900618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.011 [2024-11-26 19:29:56.900650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.011 qpair failed and we were unable to recover it. 00:28:34.011 [2024-11-26 19:29:56.900939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.011 [2024-11-26 19:29:56.900971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.011 qpair failed and we were unable to recover it. 00:28:34.011 [2024-11-26 19:29:56.901099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.011 [2024-11-26 19:29:56.901130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.011 qpair failed and we were unable to recover it. 00:28:34.011 [2024-11-26 19:29:56.901302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.011 [2024-11-26 19:29:56.901334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.011 qpair failed and we were unable to recover it. 00:28:34.011 [2024-11-26 19:29:56.901440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.011 [2024-11-26 19:29:56.901471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.011 qpair failed and we were unable to recover it. 00:28:34.011 [2024-11-26 19:29:56.901646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.011 [2024-11-26 19:29:56.901701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.011 qpair failed and we were unable to recover it. 
00:28:34.011 [2024-11-26 19:29:56.901812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.011 [2024-11-26 19:29:56.901844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.011 qpair failed and we were unable to recover it. 00:28:34.011 [2024-11-26 19:29:56.901978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.011 [2024-11-26 19:29:56.902010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.011 qpair failed and we were unable to recover it. 00:28:34.011 [2024-11-26 19:29:56.902123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.011 [2024-11-26 19:29:56.902155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.012 qpair failed and we were unable to recover it. 00:28:34.012 [2024-11-26 19:29:56.902327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.012 [2024-11-26 19:29:56.902358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.012 qpair failed and we were unable to recover it. 00:28:34.012 [2024-11-26 19:29:56.902544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.012 [2024-11-26 19:29:56.902576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.012 qpair failed and we were unable to recover it. 00:28:34.012 [2024-11-26 19:29:56.902748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.012 [2024-11-26 19:29:56.902782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.012 qpair failed and we were unable to recover it. 00:28:34.012 [2024-11-26 19:29:56.902886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.012 [2024-11-26 19:29:56.902917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.012 qpair failed and we were unable to recover it. 00:28:34.012 [2024-11-26 19:29:56.903030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.012 [2024-11-26 19:29:56.903067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.012 qpair failed and we were unable to recover it. 00:28:34.012 [2024-11-26 19:29:56.903192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.012 [2024-11-26 19:29:56.903223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.012 qpair failed and we were unable to recover it. 00:28:34.012 [2024-11-26 19:29:56.903433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.012 [2024-11-26 19:29:56.903465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.012 qpair failed and we were unable to recover it. 
00:28:34.012 [2024-11-26 19:29:56.903634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.013 [2024-11-26 19:29:56.903665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.013 qpair failed and we were unable to recover it. 00:28:34.013 [2024-11-26 19:29:56.903855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.013 [2024-11-26 19:29:56.903886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.013 qpair failed and we were unable to recover it. 00:28:34.013 [2024-11-26 19:29:56.904069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.013 [2024-11-26 19:29:56.904100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.013 qpair failed and we were unable to recover it. 00:28:34.013 [2024-11-26 19:29:56.904211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.013 [2024-11-26 19:29:56.904242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.013 qpair failed and we were unable to recover it. 00:28:34.013 [2024-11-26 19:29:56.904516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.013 [2024-11-26 19:29:56.904547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.013 qpair failed and we were unable to recover it. 00:28:34.013 [2024-11-26 19:29:56.904809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.013 [2024-11-26 19:29:56.904842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.013 qpair failed and we were unable to recover it. 00:28:34.013 [2024-11-26 19:29:56.905071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.013 [2024-11-26 19:29:56.905101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.013 qpair failed and we were unable to recover it. 00:28:34.013 [2024-11-26 19:29:56.905284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.013 [2024-11-26 19:29:56.905315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.013 qpair failed and we were unable to recover it. 00:28:34.014 [2024-11-26 19:29:56.905596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.014 [2024-11-26 19:29:56.905627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.014 qpair failed and we were unable to recover it. 00:28:34.014 [2024-11-26 19:29:56.905836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.014 [2024-11-26 19:29:56.905868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.014 qpair failed and we were unable to recover it. 
00:28:34.014 [2024-11-26 19:29:56.906059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.014 [2024-11-26 19:29:56.906089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.014 qpair failed and we were unable to recover it. 00:28:34.014 [2024-11-26 19:29:56.906359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.014 [2024-11-26 19:29:56.906391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.014 qpair failed and we were unable to recover it. 00:28:34.014 [2024-11-26 19:29:56.906513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.014 [2024-11-26 19:29:56.906544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.014 qpair failed and we were unable to recover it. 00:28:34.014 [2024-11-26 19:29:56.906728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.014 [2024-11-26 19:29:56.906761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.014 qpair failed and we were unable to recover it. 00:28:34.014 [2024-11-26 19:29:56.906888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.014 [2024-11-26 19:29:56.906919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.014 qpair failed and we were unable to recover it. 00:28:34.014 [2024-11-26 19:29:56.907099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.014 [2024-11-26 19:29:56.907130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.014 qpair failed and we were unable to recover it. 00:28:34.014 [2024-11-26 19:29:56.907236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.014 [2024-11-26 19:29:56.907266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.014 qpair failed and we were unable to recover it. 00:28:34.015 [2024-11-26 19:29:56.907393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.015 [2024-11-26 19:29:56.907424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.015 qpair failed and we were unable to recover it. 00:28:34.015 [2024-11-26 19:29:56.907627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.015 [2024-11-26 19:29:56.907658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.015 qpair failed and we were unable to recover it. 00:28:34.015 [2024-11-26 19:29:56.907845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.015 [2024-11-26 19:29:56.907875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.015 qpair failed and we were unable to recover it. 
00:28:34.015 [2024-11-26 19:29:56.907998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.015 [2024-11-26 19:29:56.908029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.015 qpair failed and we were unable to recover it. 00:28:34.015 [2024-11-26 19:29:56.908219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.015 [2024-11-26 19:29:56.908250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.015 qpair failed and we were unable to recover it. 00:28:34.015 [2024-11-26 19:29:56.908418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.015 [2024-11-26 19:29:56.908448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.015 qpair failed and we were unable to recover it. 00:28:34.015 [2024-11-26 19:29:56.908653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.015 [2024-11-26 19:29:56.908693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.015 qpair failed and we were unable to recover it. 00:28:34.015 [2024-11-26 19:29:56.908878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.015 [2024-11-26 19:29:56.908910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.015 qpair failed and we were unable to recover it. 00:28:34.015 [2024-11-26 19:29:56.909083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.015 [2024-11-26 19:29:56.909113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.015 qpair failed and we were unable to recover it. 00:28:34.015 [2024-11-26 19:29:56.909317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.015 [2024-11-26 19:29:56.909348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.015 qpair failed and we were unable to recover it. 00:28:34.015 [2024-11-26 19:29:56.909531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.015 [2024-11-26 19:29:56.909562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.015 qpair failed and we were unable to recover it. 00:28:34.015 [2024-11-26 19:29:56.909736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.015 [2024-11-26 19:29:56.909768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.015 qpair failed and we were unable to recover it. 00:28:34.015 [2024-11-26 19:29:56.909886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.015 [2024-11-26 19:29:56.909918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.015 qpair failed and we were unable to recover it. 
00:28:34.015 [2024-11-26 19:29:56.910175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.015 [2024-11-26 19:29:56.910206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.015 qpair failed and we were unable to recover it. 00:28:34.015 [2024-11-26 19:29:56.910341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.015 [2024-11-26 19:29:56.910371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.015 qpair failed and we were unable to recover it. 00:28:34.015 [2024-11-26 19:29:56.910550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.015 [2024-11-26 19:29:56.910581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.015 qpair failed and we were unable to recover it. 00:28:34.015 [2024-11-26 19:29:56.910774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.015 [2024-11-26 19:29:56.910806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.015 qpair failed and we were unable to recover it. 00:28:34.015 [2024-11-26 19:29:56.911043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.911075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.911196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.911226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.911408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.911439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.911560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.911597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.911874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.911906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.912077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.912108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 
00:28:34.016 [2024-11-26 19:29:56.912315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.912346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.912531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.912562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.912826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.912858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.912985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.913016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.913254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.913286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.913400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.913430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.913627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.913659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.913775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.913807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.913929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.913960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.914149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.914181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 
00:28:34.016 [2024-11-26 19:29:56.914313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.914344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.914463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.914495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.914734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.914768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.914883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.914914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.915165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.915196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.915434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.915465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.915634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.915665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.915867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.915899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.916020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.916051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.916307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.916338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 
00:28:34.016 [2024-11-26 19:29:56.916523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.916555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.916734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.916767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.916874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.916905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.917076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.917107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.917325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.917356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.917484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.917515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.917724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.917757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.917945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.917977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.918147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.918180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.918367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.918399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 
00:28:34.016 [2024-11-26 19:29:56.918529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.918560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.918739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.918772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.918966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.918997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.919180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.919212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.919332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.919363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.919533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.919564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.919704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.919741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.919984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.920020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.920144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.920175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 00:28:34.016 [2024-11-26 19:29:56.920288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.016 [2024-11-26 19:29:56.920319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.016 qpair failed and we were unable to recover it. 
00:28:34.017 [2024-11-26 19:29:56.920516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.920547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.920677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.920710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.920895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.920927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.921045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.921076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.921221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.921253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.921501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.921532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.921730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.921761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.921944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.921975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.922246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.922276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.922534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.922565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 
00:28:34.017 [2024-11-26 19:29:56.922739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.922773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.923018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.923049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.923220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.923250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.923494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.923525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.923695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.923727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.923912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.923943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.924177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.924209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.924335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.924367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.924539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.924570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.924867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.924899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 
00:28:34.017 [2024-11-26 19:29:56.925069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.925100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.925267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.925298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.925400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.925431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.925680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.925712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.925936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.926008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.926206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.926277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.926486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.926521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.926709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.926743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.926879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.926911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.927105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.927136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 
00:28:34.017 [2024-11-26 19:29:56.927330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.927362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.927483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.927514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.017 qpair failed and we were unable to recover it. 00:28:34.017 [2024-11-26 19:29:56.927716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.017 [2024-11-26 19:29:56.927749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.927867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.927897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.928135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.928167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.928348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.928379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.928567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.928599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.928717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.928758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.928945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.928976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.929146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.929177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 
00:28:34.018 [2024-11-26 19:29:56.929432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.929463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.929654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.929698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.929891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.929923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.930037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.930069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.930204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.930235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.930412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.930445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.930692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.930725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.930854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.930885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.931021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.931054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.931315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.931345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 
00:28:34.018 [2024-11-26 19:29:56.931462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.931493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.931694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.931730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.931899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.931930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.932046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.932078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.932262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.932293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.932482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.932513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.932696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.932729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.932968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.932999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.933119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.933150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.933348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.933380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 
00:28:34.018 [2024-11-26 19:29:56.933583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.933614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.933742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.933775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.933893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.933925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.934041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.934071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.934360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.934429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.934636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.934685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.934828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.934862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.934998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.935029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.935244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.935275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.935398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.935431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 
00:28:34.018 [2024-11-26 19:29:56.935616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.935647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.935829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.935860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.935973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.936005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.936244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.936275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.936520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.936551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.018 [2024-11-26 19:29:56.936826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.018 [2024-11-26 19:29:56.936858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.018 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.936965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.936997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.937137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.937179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.937295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.937326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.937513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.937544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 
00:28:34.019 [2024-11-26 19:29:56.937809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.937843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.938085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.938116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.938299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.938330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.938457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.938489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.938691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.938728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.938902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.938934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.939109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.939141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.939270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.939301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.939512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.939544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.939744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.939776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 
00:28:34.019 [2024-11-26 19:29:56.939888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.939919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.940127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.940159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.940271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.940303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.940409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.940441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.940565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.940598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.940891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.940924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.941105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.941138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.941353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.941385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.941554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.941585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.941693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.941727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 
00:28:34.019 [2024-11-26 19:29:56.941992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.942025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.019 [2024-11-26 19:29:56.942142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.019 [2024-11-26 19:29:56.942175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.019 qpair failed and we were unable to recover it. 00:28:34.020 [2024-11-26 19:29:56.942414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.020 [2024-11-26 19:29:56.942446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.020 qpair failed and we were unable to recover it. 00:28:34.020 [2024-11-26 19:29:56.942576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.020 [2024-11-26 19:29:56.942607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.020 qpair failed and we were unable to recover it. 00:28:34.020 [2024-11-26 19:29:56.942838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.020 [2024-11-26 19:29:56.942872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.020 qpair failed and we were unable to recover it. 00:28:34.020 [2024-11-26 19:29:56.942986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.020 [2024-11-26 19:29:56.943018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.020 qpair failed and we were unable to recover it. 00:28:34.020 [2024-11-26 19:29:56.943254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.020 [2024-11-26 19:29:56.943286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.020 qpair failed and we were unable to recover it. 00:28:34.020 [2024-11-26 19:29:56.943393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.020 [2024-11-26 19:29:56.943425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.020 qpair failed and we were unable to recover it. 00:28:34.020 [2024-11-26 19:29:56.943614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.020 [2024-11-26 19:29:56.943645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.020 qpair failed and we were unable to recover it. 00:28:34.020 [2024-11-26 19:29:56.943761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.020 [2024-11-26 19:29:56.943794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.020 qpair failed and we were unable to recover it. 
00:28:34.020 [2024-11-26 19:29:56.943911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.020 [2024-11-26 19:29:56.943942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.020 qpair failed and we were unable to recover it. 00:28:34.020 [2024-11-26 19:29:56.944132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.020 [2024-11-26 19:29:56.944164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.020 qpair failed and we were unable to recover it. 00:28:34.020 [2024-11-26 19:29:56.944287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.020 [2024-11-26 19:29:56.944318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.020 qpair failed and we were unable to recover it. 00:28:34.020 [2024-11-26 19:29:56.944433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.020 [2024-11-26 19:29:56.944465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.020 qpair failed and we were unable to recover it. 00:28:34.020 [2024-11-26 19:29:56.944585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.020 [2024-11-26 19:29:56.944617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.020 qpair failed and we were unable to recover it. 00:28:34.020 [2024-11-26 19:29:56.944759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.020 [2024-11-26 19:29:56.944791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.020 qpair failed and we were unable to recover it. 00:28:34.020 [2024-11-26 19:29:56.944911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.020 [2024-11-26 19:29:56.944944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.020 qpair failed and we were unable to recover it. 00:28:34.021 [2024-11-26 19:29:56.945122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.021 [2024-11-26 19:29:56.945159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.021 qpair failed and we were unable to recover it. 00:28:34.021 [2024-11-26 19:29:56.945272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.021 [2024-11-26 19:29:56.945304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.021 qpair failed and we were unable to recover it. 00:28:34.021 [2024-11-26 19:29:56.945564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.021 [2024-11-26 19:29:56.945595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.021 qpair failed and we were unable to recover it. 
00:28:34.021 [2024-11-26 19:29:56.945855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.021 [2024-11-26 19:29:56.945889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.021 qpair failed and we were unable to recover it. 00:28:34.021 [2024-11-26 19:29:56.946089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.021 [2024-11-26 19:29:56.946120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.021 qpair failed and we were unable to recover it. 00:28:34.021 [2024-11-26 19:29:56.946289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.021 [2024-11-26 19:29:56.946321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.021 qpair failed and we were unable to recover it. 00:28:34.021 [2024-11-26 19:29:56.946511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.021 [2024-11-26 19:29:56.946543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.021 qpair failed and we were unable to recover it. 00:28:34.021 [2024-11-26 19:29:56.946757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.021 [2024-11-26 19:29:56.946791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.021 qpair failed and we were unable to recover it. 00:28:34.021 [2024-11-26 19:29:56.946921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.021 [2024-11-26 19:29:56.946952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.021 qpair failed and we were unable to recover it. 00:28:34.021 [2024-11-26 19:29:56.947136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.022 [2024-11-26 19:29:56.947169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.022 qpair failed and we were unable to recover it. 00:28:34.022 [2024-11-26 19:29:56.947361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.022 [2024-11-26 19:29:56.947392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.022 qpair failed and we were unable to recover it. 00:28:34.022 [2024-11-26 19:29:56.947571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.022 [2024-11-26 19:29:56.947604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.022 qpair failed and we were unable to recover it. 00:28:34.022 [2024-11-26 19:29:56.947864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.022 [2024-11-26 19:29:56.947897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.022 qpair failed and we were unable to recover it. 
00:28:34.022 [2024-11-26 19:29:56.948083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.022 [2024-11-26 19:29:56.948116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.022 qpair failed and we were unable to recover it. 00:28:34.022 [2024-11-26 19:29:56.948234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.022 [2024-11-26 19:29:56.948265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.022 qpair failed and we were unable to recover it. 00:28:34.022 [2024-11-26 19:29:56.948394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.022 [2024-11-26 19:29:56.948426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.022 qpair failed and we were unable to recover it. 00:28:34.022 [2024-11-26 19:29:56.948608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.022 [2024-11-26 19:29:56.948640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.022 qpair failed and we were unable to recover it. 00:28:34.022 [2024-11-26 19:29:56.948782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.022 [2024-11-26 19:29:56.948815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.022 qpair failed and we were unable to recover it. 00:28:34.022 [2024-11-26 19:29:56.948995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.022 [2024-11-26 19:29:56.949028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.022 qpair failed and we were unable to recover it. 00:28:34.022 [2024-11-26 19:29:56.949197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.022 [2024-11-26 19:29:56.949229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.022 qpair failed and we were unable to recover it. 00:28:34.022 [2024-11-26 19:29:56.949368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.022 [2024-11-26 19:29:56.949401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.022 qpair failed and we were unable to recover it. 00:28:34.022 [2024-11-26 19:29:56.949581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.022 [2024-11-26 19:29:56.949613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.022 qpair failed and we were unable to recover it. 00:28:34.022 [2024-11-26 19:29:56.949818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.022 [2024-11-26 19:29:56.949850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.022 qpair failed and we were unable to recover it. 
00:28:34.022 [2024-11-26 19:29:56.950054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.022 [2024-11-26 19:29:56.950087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.022 qpair failed and we were unable to recover it. 00:28:34.022 [2024-11-26 19:29:56.950206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.022 [2024-11-26 19:29:56.950238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.022 qpair failed and we were unable to recover it. 00:28:34.022 [2024-11-26 19:29:56.950415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.022 [2024-11-26 19:29:56.950447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.022 qpair failed and we were unable to recover it. 00:28:34.022 [2024-11-26 19:29:56.950653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.022 [2024-11-26 19:29:56.950697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.022 qpair failed and we were unable to recover it. 00:28:34.022 [2024-11-26 19:29:56.950893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.022 [2024-11-26 19:29:56.950933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.022 qpair failed and we were unable to recover it. 00:28:34.022 [2024-11-26 19:29:56.951069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.951101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.951273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.951311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.951428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.951460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.951658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.951702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.951946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.951979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 
00:28:34.023 [2024-11-26 19:29:56.952117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.952149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.952266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.952298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.952431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.952463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.952636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.952668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.952797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.952829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.952932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.952965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.953157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.953189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.953296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.953328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.953508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.953542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.953779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.953812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 
00:28:34.023 [2024-11-26 19:29:56.953999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.954031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.954238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.954271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.954542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.954574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.954712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.954745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.954919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.954951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.955147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.955187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.955363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.955396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.955569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.955602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.955793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.955826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.955966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.956000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 
00:28:34.023 [2024-11-26 19:29:56.956201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.956233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.956524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.956556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.956800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.956841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.957012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.957044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.957291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.957323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.957459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.957491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.957625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.957657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.957885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.957918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.958102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.958134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.958313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.958346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 
00:28:34.023 [2024-11-26 19:29:56.958455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.958486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.958612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.958644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.958831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.958862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.959095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.959127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.959302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.959339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.959471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.959507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.959750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.959783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.959967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.959998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.960113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.960145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.960408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.960440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 
00:28:34.023 [2024-11-26 19:29:56.960558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.960590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.960832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.960866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.960981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.961012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.961138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.961170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.961364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.961396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.961583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.961616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.961903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.961936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.962122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.962159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.962452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.962485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 00:28:34.023 [2024-11-26 19:29:56.962688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.023 [2024-11-26 19:29:56.962723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.023 qpair failed and we were unable to recover it. 
00:28:34.023 [2024-11-26 19:29:56.962984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.023 [2024-11-26 19:29:56.963016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420
00:28:34.023 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, with only the timestamps changing, through [2024-11-26 19:29:57.006112] / elapsed 00:28:34.027 ...]
00:28:34.027 [2024-11-26 19:29:57.006316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.006347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.006538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.006569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.006697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.006730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.006995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.007026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.007236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.007267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.007452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.007484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.007658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.007698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.007814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.007845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.008032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.008064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.008237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.008268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 
00:28:34.027 [2024-11-26 19:29:57.008471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.008502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.008712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.008746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.008871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.008902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.009119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.009150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.009331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.009369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.009541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.009572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.009709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.009742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.009936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.009968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.010148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.010180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.010288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.010320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 
00:28:34.027 [2024-11-26 19:29:57.010427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.010458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.010768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.010801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.010979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.011011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.011192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.011224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.011407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.011439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.011641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.011679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.011854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.011885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.011998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.012030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.012284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.012317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.012506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.012538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 
00:28:34.027 [2024-11-26 19:29:57.012729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.012761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.013008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.013039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.013146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.013178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.013306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.013338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.013516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.013547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.013727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.013760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.013951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.013981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.014249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.014282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.014449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.014480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.014663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.014708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 
00:28:34.027 [2024-11-26 19:29:57.014880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.014912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.015163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.015195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.015314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.015346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.015540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.015571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.015746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.015779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.015954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.015986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.016208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.016240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.016357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.016388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.016505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.016536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.016727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.016761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 
00:28:34.027 [2024-11-26 19:29:57.017001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.017034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.017205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.017237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.017361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.017393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.017613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.017645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.017846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.017885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.018069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.018102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.018279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.018311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.018449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.018481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.018609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.018641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.018771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.018803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 
00:28:34.027 [2024-11-26 19:29:57.018990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.019022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.019214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.019245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.019352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.019383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.019561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.019593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.019702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.019737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.019906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.019945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.020156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.020187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.027 qpair failed and we were unable to recover it. 00:28:34.027 [2024-11-26 19:29:57.020354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.027 [2024-11-26 19:29:57.020386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.020516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.020549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.020742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.020775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 
00:28:34.028 [2024-11-26 19:29:57.021020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.021054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.021274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.021305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.021470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.021502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.021706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.021739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.021984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.022016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.022201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.022236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.022354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.022386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.022498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.022530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.022700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.022733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.022839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.022870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 
00:28:34.028 [2024-11-26 19:29:57.023043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.023074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.023231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.023302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.023443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.023478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.023730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.023766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.023959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.023991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.024190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.024223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.024351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.024381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.024554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.024585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.024709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.024743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.024955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.024987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 
00:28:34.028 [2024-11-26 19:29:57.025164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.025194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.025406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.025438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.025617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.025648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.025866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.025897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.026017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.026047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.026300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.026331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.026528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.026560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.026736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.026771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.026943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.026975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.027087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.027118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 
00:28:34.028 [2024-11-26 19:29:57.027241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.027273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.027540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.027572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.027751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.027785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.027904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.027935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.028057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.028088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.028204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.028236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.028428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.028460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.028666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.028708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.028883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.028920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.029099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.029129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 
00:28:34.028 [2024-11-26 19:29:57.029305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.029336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.029520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.029552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.029785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.029819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.029991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.030022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.030203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.030234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.030489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.030521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.030692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.030725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.030936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.030966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.031100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.031134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.031304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.031336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 
00:28:34.028 [2024-11-26 19:29:57.031504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.031537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.031776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.031809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.032034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.032068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.032324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.032355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.032540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.032572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.032810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.032842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.033060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.033091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.033215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.033246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.033512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.033544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.033747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.033780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 
00:28:34.028 [2024-11-26 19:29:57.034004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.034035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.028 qpair failed and we were unable to recover it. 00:28:34.028 [2024-11-26 19:29:57.034253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.028 [2024-11-26 19:29:57.034284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.029 qpair failed and we were unable to recover it. 00:28:34.029 [2024-11-26 19:29:57.034418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.029 [2024-11-26 19:29:57.034449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.029 qpair failed and we were unable to recover it. 00:28:34.029 [2024-11-26 19:29:57.034645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.029 [2024-11-26 19:29:57.034700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.029 qpair failed and we were unable to recover it. 00:28:34.029 [2024-11-26 19:29:57.034887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.029 [2024-11-26 19:29:57.034918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.029 qpair failed and we were unable to recover it. 00:28:34.029 [2024-11-26 19:29:57.035120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.029 [2024-11-26 19:29:57.035157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.029 qpair failed and we were unable to recover it. 00:28:34.029 [2024-11-26 19:29:57.035394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.029 [2024-11-26 19:29:57.035426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.029 qpair failed and we were unable to recover it. 00:28:34.029 [2024-11-26 19:29:57.035593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.029 [2024-11-26 19:29:57.035624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.029 qpair failed and we were unable to recover it. 00:28:34.029 [2024-11-26 19:29:57.035831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.029 [2024-11-26 19:29:57.035862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.029 qpair failed and we were unable to recover it. 00:28:34.029 [2024-11-26 19:29:57.036030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.029 [2024-11-26 19:29:57.036062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.029 qpair failed and we were unable to recover it. 
00:28:34.029 [2024-11-26 19:29:57.036166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.029 [2024-11-26 19:29:57.036198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.029 qpair failed and we were unable to recover it. 00:28:34.029 [2024-11-26 19:29:57.036396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.029 [2024-11-26 19:29:57.036426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.029 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.036622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.036654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.036857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.036888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.037002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.037033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.037181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.037212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.037335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.037365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.037543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.037574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.037691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.037723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.037979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.038010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 
00:28:34.327 [2024-11-26 19:29:57.038271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.038304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.038427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.038457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.038565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.038597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.038736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.038768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.038890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.038920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.039156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.039187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.039372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.039403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.039581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.039613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.039743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.039775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.039949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.039979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 
00:28:34.327 [2024-11-26 19:29:57.040149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.040180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.040370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.040401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.040600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.040637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.040842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.040874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.041043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.041074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.041203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.041234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.041422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.041453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.041712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.041747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.041934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.041966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.042095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.042125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 
00:28:34.327 [2024-11-26 19:29:57.042295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.042325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.042425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.042458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.042652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.042695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.042958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.042990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.043245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.043276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.327 [2024-11-26 19:29:57.043397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.327 [2024-11-26 19:29:57.043428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.327 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.043619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.043650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.043774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.043805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.043978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.044008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.044191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.044224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 
00:28:34.328 [2024-11-26 19:29:57.044353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.044383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.044503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.044535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.044652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.044690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.044879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.044909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.045083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.045114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.045394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.045426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.045539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.045570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.045697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.045731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.045997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.046028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.046215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.046245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 
00:28:34.328 [2024-11-26 19:29:57.046507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.046540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.046662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.046716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.046857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.046889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.047063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.047094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.047274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.047304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.047515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.047545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.047758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.047790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.047917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.047947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.048193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.048224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.048390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.048421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 
00:28:34.328 [2024-11-26 19:29:57.048542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.048573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.048689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.048721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.048929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.048961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.049073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.049104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.049291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.049320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.049442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.049472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.049603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.049633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.049758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.049790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.049912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.049944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.050139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.050170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 
00:28:34.328 [2024-11-26 19:29:57.050405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.050434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.050604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.050634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.050762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.050794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.050966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.328 [2024-11-26 19:29:57.050998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.328 qpair failed and we were unable to recover it. 00:28:34.328 [2024-11-26 19:29:57.051194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.051225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.051349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.051380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.051647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.051691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.051869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.051899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.052019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.052049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.052150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.052178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 
00:28:34.329 [2024-11-26 19:29:57.052298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.052330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.052440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.052471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.052601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.052631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.052754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.052786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.052900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.052930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.053066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.053098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.053262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.053293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.053504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.053535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.053637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.053698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.053945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.053977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 
00:28:34.329 [2024-11-26 19:29:57.054161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.054199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.054297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.054329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.054531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.054562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.054680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.054712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.054886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.054917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.055031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.055062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.055187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.055218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.055349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.055379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.055498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.055528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.055639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.055682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 
00:28:34.329 [2024-11-26 19:29:57.055876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.055909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.056083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.056114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.056309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.056341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.056476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.056508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.056693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.056726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.056901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.056933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.057041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.057073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.057242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.057272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.057546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.057577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.057774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.057808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 
00:28:34.329 [2024-11-26 19:29:57.057944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.057976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.329 [2024-11-26 19:29:57.058152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.329 [2024-11-26 19:29:57.058183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.329 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.058354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.058385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.058566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.058597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.058779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.058812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.058932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.058963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.059076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.059108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.059209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.059247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.059413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.059444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.059613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.059644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 
00:28:34.330 [2024-11-26 19:29:57.059767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.059799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.059924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.059956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.060059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.060091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.060199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.060230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.060352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.060384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.060506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.060537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.060704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.060736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.060860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.060891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.061009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.061040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.061161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.061194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 
00:28:34.330 [2024-11-26 19:29:57.061388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.061419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.061534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.061567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.061713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.061748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.061925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.061957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.062258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.062289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.062460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.062490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.062688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.062722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.062844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.062874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.063148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.063180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.063348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.063379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 
00:28:34.330 [2024-11-26 19:29:57.063633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.063663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.063863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.063895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.064010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.064042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.064163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.064195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.064435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.064467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.064592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.064623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.064806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.064840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.065012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.065044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.065153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.065184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.330 qpair failed and we were unable to recover it. 00:28:34.330 [2024-11-26 19:29:57.065327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.330 [2024-11-26 19:29:57.065359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 
00:28:34.331 [2024-11-26 19:29:57.065547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.065578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 00:28:34.331 [2024-11-26 19:29:57.065691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.065728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 00:28:34.331 [2024-11-26 19:29:57.065912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.065943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 00:28:34.331 [2024-11-26 19:29:57.066133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.066165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 00:28:34.331 [2024-11-26 19:29:57.066277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.066308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 00:28:34.331 [2024-11-26 19:29:57.066437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.066471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 00:28:34.331 [2024-11-26 19:29:57.066593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.066625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 00:28:34.331 [2024-11-26 19:29:57.066817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.066849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 00:28:34.331 [2024-11-26 19:29:57.067026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.067058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 00:28:34.331 [2024-11-26 19:29:57.067193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.067225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 
00:28:34.331 [2024-11-26 19:29:57.067343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.067377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 00:28:34.331 [2024-11-26 19:29:57.067553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.067584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 00:28:34.331 [2024-11-26 19:29:57.067695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.067727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 00:28:34.331 [2024-11-26 19:29:57.067869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.067901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 00:28:34.331 [2024-11-26 19:29:57.068102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.068134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 00:28:34.331 [2024-11-26 19:29:57.068253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.068284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 00:28:34.331 [2024-11-26 19:29:57.068545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.068577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 00:28:34.331 [2024-11-26 19:29:57.068711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.068744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 00:28:34.331 [2024-11-26 19:29:57.068991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.069023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 00:28:34.331 [2024-11-26 19:29:57.069139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.331 [2024-11-26 19:29:57.069169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.331 qpair failed and we were unable to recover it. 
00:28:34.333 [2024-11-26 19:29:57.085466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.333 [2024-11-26 19:29:57.085497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.333 qpair failed and we were unable to recover it. 00:28:34.333 [2024-11-26 19:29:57.085598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.333 [2024-11-26 19:29:57.085629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.333 qpair failed and we were unable to recover it. 00:28:34.333 [2024-11-26 19:29:57.085908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.333 [2024-11-26 19:29:57.085941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.333 qpair failed and we were unable to recover it. 00:28:34.333 [2024-11-26 19:29:57.086164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.333 [2024-11-26 19:29:57.086236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.333 qpair failed and we were unable to recover it. 00:28:34.333 [2024-11-26 19:29:57.086487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.333 [2024-11-26 19:29:57.086558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.333 qpair failed and we were unable to recover it. 00:28:34.333 [2024-11-26 19:29:57.086881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.333 [2024-11-26 19:29:57.086952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.333 qpair failed and we were unable to recover it. 00:28:34.333 [2024-11-26 19:29:57.087255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.333 [2024-11-26 19:29:57.087292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.333 qpair failed and we were unable to recover it. 00:28:34.333 [2024-11-26 19:29:57.087425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.333 [2024-11-26 19:29:57.087457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.333 qpair failed and we were unable to recover it. 00:28:34.333 [2024-11-26 19:29:57.087695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.333 [2024-11-26 19:29:57.087730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.333 qpair failed and we were unable to recover it. 00:28:34.333 [2024-11-26 19:29:57.087905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.333 [2024-11-26 19:29:57.087936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.334 qpair failed and we were unable to recover it. 
00:28:34.334 [2024-11-26 19:29:57.093900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.334 [2024-11-26 19:29:57.093930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.334 qpair failed and we were unable to recover it. 00:28:34.334 [2024-11-26 19:29:57.094175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.334 [2024-11-26 19:29:57.094207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.334 qpair failed and we were unable to recover it. 00:28:34.334 [2024-11-26 19:29:57.094391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.334 [2024-11-26 19:29:57.094421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.334 qpair failed and we were unable to recover it. 00:28:34.334 [2024-11-26 19:29:57.094578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.334 [2024-11-26 19:29:57.094649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.334 qpair failed and we were unable to recover it. 00:28:34.334 [2024-11-26 19:29:57.094864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.334 [2024-11-26 19:29:57.094900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.334 qpair failed and we were unable to recover it. 00:28:34.334 [2024-11-26 19:29:57.095029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.334 [2024-11-26 19:29:57.095061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.334 qpair failed and we were unable to recover it. 00:28:34.334 [2024-11-26 19:29:57.095181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.334 [2024-11-26 19:29:57.095213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.334 qpair failed and we were unable to recover it. 00:28:34.334 [2024-11-26 19:29:57.095424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.335 [2024-11-26 19:29:57.095456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.335 qpair failed and we were unable to recover it. 00:28:34.335 [2024-11-26 19:29:57.095719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.335 [2024-11-26 19:29:57.095759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.335 qpair failed and we were unable to recover it. 00:28:34.335 [2024-11-26 19:29:57.095951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.335 [2024-11-26 19:29:57.095984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.335 qpair failed and we were unable to recover it. 
00:28:34.337 [2024-11-26 19:29:57.110416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.110448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.110629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.110661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.110778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.110810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.110998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.111031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.111201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.111233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.111403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.111434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.111633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.111666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.111808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.111842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.111952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.111984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.112179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.112211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 
00:28:34.337 [2024-11-26 19:29:57.112395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.112428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.112677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.112711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.112883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.112915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.113187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.113219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.113349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.113382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.113502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.113535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.113712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.113752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.114000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.114032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.114270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.114301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.114482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.114515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 
00:28:34.337 [2024-11-26 19:29:57.114738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.114771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.114964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.114997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.115184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.115216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.115489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.115521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.115757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.115792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.116027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.116058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.116229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.116261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.116458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.116491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.116611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.116643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.116911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.116943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 
00:28:34.337 [2024-11-26 19:29:57.117187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.117219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.117346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.117379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.117553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.117585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.117826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.117861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.117987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.118019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.118132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.118164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.118402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.118435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.337 qpair failed and we were unable to recover it. 00:28:34.337 [2024-11-26 19:29:57.118613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.337 [2024-11-26 19:29:57.118646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.118826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.118858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.119043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.119075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 
00:28:34.338 [2024-11-26 19:29:57.119257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.119289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.119533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.119565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.119749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.119782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.119992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.120024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.120243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.120276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.120465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.120497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.120616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.120648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.120829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.120861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.121031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.121063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.121167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.121200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 
00:28:34.338 [2024-11-26 19:29:57.121370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.121403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.121638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.121680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.121876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.121907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.122032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.122064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.122304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.122337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.122522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.122555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.122713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.122753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.123014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.123045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.123220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.123252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.123420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.123452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 
00:28:34.338 [2024-11-26 19:29:57.123629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.123661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.123852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.123885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.124061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.124093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.124273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.124305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.124471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.124503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.124706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.124739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.124908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.124940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.125056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.125088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.125221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.125252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.125359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.125391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 
00:28:34.338 [2024-11-26 19:29:57.125657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.125698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.125951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.125983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.126250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.126282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.126466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.126497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.126627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.338 [2024-11-26 19:29:57.126658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.338 qpair failed and we were unable to recover it. 00:28:34.338 [2024-11-26 19:29:57.126870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.126902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.127142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.127175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.127356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.127388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.127561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.127593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.127801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.127834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 
00:28:34.339 [2024-11-26 19:29:57.128020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.128052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.128236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.128268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.128456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.128487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.128614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.128647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.128784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.128816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.128941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.128973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.129144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.129176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.129300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.129332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.129543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.129575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.129811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.129845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 
00:28:34.339 [2024-11-26 19:29:57.129948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.129979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.130111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.130143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.130319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.130352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.130515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.130547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.130735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.130768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.130879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.130910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.131048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.131086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.131262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.131294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.131464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.131496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.131613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.131646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 
00:28:34.339 [2024-11-26 19:29:57.131922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.131953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.132214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.132246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.132361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.132394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.132608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.132639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.132837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.132870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.133060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.133092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.133280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.133313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.133484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.133516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.133692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.133725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.133900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.133932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 
00:28:34.339 [2024-11-26 19:29:57.134122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.134154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.134282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.134314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.134543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.134574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.339 qpair failed and we were unable to recover it. 00:28:34.339 [2024-11-26 19:29:57.134742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.339 [2024-11-26 19:29:57.134776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.134911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.134944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.135181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.135212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.135317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.135349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.135450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.135481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.135655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.135708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.135877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.135909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 
00:28:34.340 [2024-11-26 19:29:57.136041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.136073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.136179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.136210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.136323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.136355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.136522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.136560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.136833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.136866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.137058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.137089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.137349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.137382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.137594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.137626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.137842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.137875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.137999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.138031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 
00:28:34.340 [2024-11-26 19:29:57.138136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.138167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.138300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.138332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.138515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.138547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.138732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.138766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.138888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.138920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.139045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.139076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.139252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.139283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.139463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.139495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.139699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.139732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.139925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.139956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 
00:28:34.340 [2024-11-26 19:29:57.140165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.140196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.140382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.140414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.140619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.140651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.140776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.140808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.141081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.141113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.141334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.141365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.141486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.340 [2024-11-26 19:29:57.141517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.340 qpair failed and we were unable to recover it. 00:28:34.340 [2024-11-26 19:29:57.141720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.141752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.141891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.141923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.142131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.142162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 
00:28:34.341 [2024-11-26 19:29:57.142351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.142383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.142560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.142592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.142859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.142892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.143154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.143186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.143312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.143344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.143515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.143547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.143798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.143831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.143933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.143965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.144078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.144110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.144386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.144419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 
00:28:34.341 [2024-11-26 19:29:57.144625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.144657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.144799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.144832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.144954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.144986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.145161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.145203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.145310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.145342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.145548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.145580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.145688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.145721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.145923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.145955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.146087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.146119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.146230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.146262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 
00:28:34.341 [2024-11-26 19:29:57.146429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.146461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.146564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.146597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.146726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.146759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.146876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.146908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.147109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.147141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.147255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.147287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.147385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.147416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.147544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.147576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.147812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.147845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.147979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.148011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 
00:28:34.341 [2024-11-26 19:29:57.148195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.148227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.148397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.148429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.148640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.148680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.148857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.148890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.149075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.149107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.341 qpair failed and we were unable to recover it. 00:28:34.341 [2024-11-26 19:29:57.149393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.341 [2024-11-26 19:29:57.149425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.149605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.149637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.149818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.149851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.149983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.150015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.150196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.150228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 
00:28:34.342 [2024-11-26 19:29:57.150437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.150469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.150593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.150625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.150817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.150850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.151040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.151072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.151246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.151278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.151386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.151417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.151541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.151573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.151707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.151740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.151910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.151941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.152136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.152167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 
00:28:34.342 [2024-11-26 19:29:57.152300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.152332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.152565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.152597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.152729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.152762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.152898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.152936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.153179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.153210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.153399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.153429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.153599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.153631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.153748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.153780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.153958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.153990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.154102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.154134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 
00:28:34.342 [2024-11-26 19:29:57.154242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.154273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.154453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.154484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.154697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.154730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.154833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.154865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.155105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.155136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.155401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.155433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.155647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.155700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.155896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.155928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.156172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.156204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.156411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.156443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 
00:28:34.342 [2024-11-26 19:29:57.156612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.156644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.156826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.156858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.156970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.157001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.342 [2024-11-26 19:29:57.157184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.342 [2024-11-26 19:29:57.157216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.342 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.157400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.157432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.157563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.157594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.157821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.157855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.158092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.158124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.158301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.158332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.158465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.158497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 
00:28:34.343 [2024-11-26 19:29:57.158697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.158738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.158999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.159031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.159165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.159196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.159302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.159333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.159451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.159482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.159656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.159696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.159864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.159895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.160033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.160065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.160185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.160218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.160408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.160439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 
00:28:34.343 [2024-11-26 19:29:57.160544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.160575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.160765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.160798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.160918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.160949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.161212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.161249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.161444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.161476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.161661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.161705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.161897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.161929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.162143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.162174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.162372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.162404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.162500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.162531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 
00:28:34.343 [2024-11-26 19:29:57.162734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.162767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.162958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.162990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.163164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.163195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.163372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.163404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.163541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.163572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.163762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.163795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.163922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.163953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.164131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.164163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.164398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.164430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.164608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.164640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 
00:28:34.343 [2024-11-26 19:29:57.164837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.164870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.165063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.343 [2024-11-26 19:29:57.165095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.343 qpair failed and we were unable to recover it. 00:28:34.343 [2024-11-26 19:29:57.165216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.165247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.165484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.165516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.165713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.165745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.165947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.165979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.166168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.166200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.166329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.166361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.166530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.166561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.166747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.166780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 
00:28:34.344 [2024-11-26 19:29:57.166979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.167012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.167297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.167328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.167452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.167483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.167603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.167635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.167773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.167805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.167927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.167959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.168141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.168173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.168297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.168328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.168597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.168629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.168874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.168907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 
00:28:34.344 [2024-11-26 19:29:57.169151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.169183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.169425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.169456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.169640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.169681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.169857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.169894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.170025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.170056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.170247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.170279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.170462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.170494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.170691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.170724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.170828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.170859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.170983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.171015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 
00:28:34.344 [2024-11-26 19:29:57.171203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.171236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.171412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.171443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.171620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.171653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.171833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.171864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.172098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.172130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.172236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.172267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.172441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.172472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.172594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.172626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.172799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.172831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 00:28:34.344 [2024-11-26 19:29:57.172937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.344 [2024-11-26 19:29:57.172969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.344 qpair failed and we were unable to recover it. 
00:28:34.344 [2024-11-26 19:29:57.173082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.345 [2024-11-26 19:29:57.173112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.345 qpair failed and we were unable to recover it. 00:28:34.345 [2024-11-26 19:29:57.173316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.345 [2024-11-26 19:29:57.173347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.345 qpair failed and we were unable to recover it. 00:28:34.345 [2024-11-26 19:29:57.173527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.345 [2024-11-26 19:29:57.173559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.345 qpair failed and we were unable to recover it. 00:28:34.345 [2024-11-26 19:29:57.173729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.345 [2024-11-26 19:29:57.173761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.345 qpair failed and we were unable to recover it. 00:28:34.345 [2024-11-26 19:29:57.173940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.345 [2024-11-26 19:29:57.173971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.345 qpair failed and we were unable to recover it. 00:28:34.345 [2024-11-26 19:29:57.174160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.345 [2024-11-26 19:29:57.174191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.345 qpair failed and we were unable to recover it. 00:28:34.345 [2024-11-26 19:29:57.174448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.345 [2024-11-26 19:29:57.174480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.345 qpair failed and we were unable to recover it. 00:28:34.345 [2024-11-26 19:29:57.174665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.345 [2024-11-26 19:29:57.174717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.345 qpair failed and we were unable to recover it. 00:28:34.345 [2024-11-26 19:29:57.174890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.345 [2024-11-26 19:29:57.174921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.345 qpair failed and we were unable to recover it. 00:28:34.345 [2024-11-26 19:29:57.175184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.345 [2024-11-26 19:29:57.175216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.345 qpair failed and we were unable to recover it. 
00:28:34.349 [2024-11-26 19:29:57.210290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.349 [2024-11-26 19:29:57.210362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:34.349 qpair failed and we were unable to recover it.
00:28:34.349 [2024-11-26 19:29:57.210623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.349 [2024-11-26 19:29:57.210658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:34.349 qpair failed and we were unable to recover it.
00:28:34.349 [2024-11-26 19:29:57.210874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.349 [2024-11-26 19:29:57.210908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:34.349 qpair failed and we were unable to recover it.
00:28:34.349 [2024-11-26 19:29:57.211120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.349 [2024-11-26 19:29:57.211152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:34.349 qpair failed and we were unable to recover it.
00:28:34.349 [2024-11-26 19:29:57.211339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.349 [2024-11-26 19:29:57.211370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:34.349 qpair failed and we were unable to recover it.
00:28:34.349 [2024-11-26 19:29:57.211657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.349 [2024-11-26 19:29:57.211703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:34.349 qpair failed and we were unable to recover it.
00:28:34.349 [2024-11-26 19:29:57.211825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.349 [2024-11-26 19:29:57.211855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:34.349 qpair failed and we were unable to recover it.
00:28:34.349 [2024-11-26 19:29:57.212068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.349 [2024-11-26 19:29:57.212100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:34.349 qpair failed and we were unable to recover it.
00:28:34.349 [2024-11-26 19:29:57.212344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.349 [2024-11-26 19:29:57.212375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:34.349 qpair failed and we were unable to recover it.
00:28:34.349 [2024-11-26 19:29:57.212611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.349 [2024-11-26 19:29:57.212642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:34.349 qpair failed and we were unable to recover it.
00:28:34.350 [2024-11-26 19:29:57.217746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.217778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 00:28:34.350 [2024-11-26 19:29:57.218048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.218079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 00:28:34.350 [2024-11-26 19:29:57.218373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.218421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 00:28:34.350 [2024-11-26 19:29:57.218541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.218563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 00:28:34.350 [2024-11-26 19:29:57.218656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.218683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 00:28:34.350 [2024-11-26 19:29:57.218787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.218807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 00:28:34.350 [2024-11-26 19:29:57.219048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.219069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 00:28:34.350 [2024-11-26 19:29:57.219226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.219245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 00:28:34.350 [2024-11-26 19:29:57.219398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.219418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 00:28:34.350 [2024-11-26 19:29:57.219509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.219528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 
00:28:34.350 [2024-11-26 19:29:57.219719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.219745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 00:28:34.350 [2024-11-26 19:29:57.219903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.219923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 00:28:34.350 [2024-11-26 19:29:57.220071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.220092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 00:28:34.350 [2024-11-26 19:29:57.220258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.220279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 00:28:34.350 [2024-11-26 19:29:57.220393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.220413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 00:28:34.350 [2024-11-26 19:29:57.220508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.220529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 00:28:34.350 [2024-11-26 19:29:57.220776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.220797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 00:28:34.350 [2024-11-26 19:29:57.220887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.220907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 00:28:34.350 [2024-11-26 19:29:57.221119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.221140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 00:28:34.350 [2024-11-26 19:29:57.221250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.350 [2024-11-26 19:29:57.221270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.350 qpair failed and we were unable to recover it. 
00:28:34.350 [2024-11-26 19:29:57.221509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.221529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.221747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.221768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.221931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.221953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.222206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.222228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.222456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.222478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.222718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.222740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.222908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.222928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.223023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.223041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.223209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.223229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.223413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.223434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 
00:28:34.351 [2024-11-26 19:29:57.223690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.223713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.223875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.223895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.224123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.224144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.224316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.224336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.224496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.224517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.224759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.224781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.224955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.224976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.225226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.225252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.225355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.225375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.225543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.225563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 
00:28:34.351 [2024-11-26 19:29:57.225661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.225689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.225846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.225866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.227697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.227723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.227894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.227908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.227994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.228007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.228106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.228119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.228199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.228211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.228354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.228368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.228514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.228526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.228733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.228748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 
00:28:34.351 [2024-11-26 19:29:57.228898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.228912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.229150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.229165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.229300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.229314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.229523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.229537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.229695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.229710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.229861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.229876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.229956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.229969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.230058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.230071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.351 qpair failed and we were unable to recover it. 00:28:34.351 [2024-11-26 19:29:57.230207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.351 [2024-11-26 19:29:57.230220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.230469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.230486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 
00:28:34.352 [2024-11-26 19:29:57.230558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.230572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.230640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.230653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.230886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.230903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.231127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.231143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.231295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.231313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.231474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.231489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.231623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.231637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.231806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.231819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.231988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.232002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.232149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.232163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 
00:28:34.352 [2024-11-26 19:29:57.232321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.232335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.232489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.232503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.232642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.232655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.232883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.232896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.233032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.233046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.233123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.233136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.233291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.233306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.235683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.235712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.235978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.235996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.236148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.236164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 
00:28:34.352 [2024-11-26 19:29:57.236394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.236408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.236577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.236590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.236821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.236836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.236994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.237007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.237157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.237172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.237316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.237331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.237475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.237488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.237712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.237727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.237873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.237885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.237971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.237981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 
00:28:34.352 [2024-11-26 19:29:57.238124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.238137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.238354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.238367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.238568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.238578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.238772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.238783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.238920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.238930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.239137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.239148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.239233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.352 [2024-11-26 19:29:57.239241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.352 qpair failed and we were unable to recover it. 00:28:34.352 [2024-11-26 19:29:57.239461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.239472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.239605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.239616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.239762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.239776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 
00:28:34.353 [2024-11-26 19:29:57.240020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.240035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.240231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.240243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.240529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.240543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.240740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.240753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.240886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.240896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.241059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.241070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.241193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.241203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.241331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.241341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.241425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.241434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.241651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.241661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 
00:28:34.353 [2024-11-26 19:29:57.241824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.241835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.242059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.242070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.242229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.242242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.242330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.242339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.242484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.242495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.242687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.242698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.242844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.242855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.243005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.243016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.243231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.243242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.243435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.243446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 
00:28:34.353 [2024-11-26 19:29:57.243586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.243596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.243746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.243760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.243971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.243984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.244059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.244069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.244307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.244320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.244565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.244579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.244777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.244788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.244944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.244954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.245081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.245092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.245292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.245303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 
00:28:34.353 [2024-11-26 19:29:57.247682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.247725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.247977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.247992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.248213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.248230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.248412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.248424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.248658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.248680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.248837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.353 [2024-11-26 19:29:57.248848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.353 qpair failed and we were unable to recover it. 00:28:34.353 [2024-11-26 19:29:57.248930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.248939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.249020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.249029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.249116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.249125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.249277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.249286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 
00:28:34.354 [2024-11-26 19:29:57.249429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.249439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.249571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.249583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.249772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.249785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.249938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.249952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.250086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.250097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.250185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.250194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.250351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.250363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.250502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.250512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.250656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.250666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.250749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.250758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 
00:28:34.354 [2024-11-26 19:29:57.250949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.250960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.251191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.251202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.251413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.251427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.251566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.251578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.251666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.251681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.251760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.251771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.251963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.251976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.252195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.252207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.252359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.252370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.252539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.252553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 
00:28:34.354 [2024-11-26 19:29:57.252742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.252753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.252890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.252900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.253094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.253105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.253321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.253334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.253423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.253434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.253635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.253645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.253789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.253800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.253954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.253964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.254038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.254047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 00:28:34.354 [2024-11-26 19:29:57.254229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.254239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.354 qpair failed and we were unable to recover it. 
00:28:34.354 [2024-11-26 19:29:57.254456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.354 [2024-11-26 19:29:57.254466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.254592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.254604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.254743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.254757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.254887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.254898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.255061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.255072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.255226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.255237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.255315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.255324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.255472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.255483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.255570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.255580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.255667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.255683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 
00:28:34.355 [2024-11-26 19:29:57.255835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.255846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.256064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.256075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.256293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.256304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.256396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.256405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.256600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.256613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.256838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.256852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.256938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.256946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.257134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.257145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.257348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.257359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.259682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.259721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 
00:28:34.355 [2024-11-26 19:29:57.259947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.259958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.260174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.260188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.260331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.260343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.260517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.260529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.260620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.260629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.260769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.260782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.260949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.260961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.261182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.261193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.261434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.261444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.261590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.261600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 
00:28:34.355 [2024-11-26 19:29:57.261802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.261813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.261968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.261981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.262121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.262134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.262323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.262335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.262473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.262484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.262560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.262570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.262780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.262794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.263011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.263025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.355 qpair failed and we were unable to recover it. 00:28:34.355 [2024-11-26 19:29:57.263159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.355 [2024-11-26 19:29:57.263170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.263329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.263343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 
00:28:34.356 [2024-11-26 19:29:57.263494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.263505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.263583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.263592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.263722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.263732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.263946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.263955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.264033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.264043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.264101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.264111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.264196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.264206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.264272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.264282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.264430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.264439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.264564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.264576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 
00:28:34.356 [2024-11-26 19:29:57.264806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.264821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.264962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.264974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.265121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.265134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.265259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.265270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.265426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.265438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.265634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.265647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.265907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.265923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.266122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.266135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.266283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.266294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.266512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.266523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 
00:28:34.356 [2024-11-26 19:29:57.266607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.266616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.266773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.266784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.266934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.266947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.267211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.267225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.267319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.267329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.267456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.267468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.267691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.267705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.267846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.267858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.268060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.268072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.268250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.268263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 
00:28:34.356 [2024-11-26 19:29:57.268447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.268458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.268606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.268617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.268716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.268725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.268857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.268867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.269071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.269081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.271688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.271712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.356 [2024-11-26 19:29:57.271942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.356 [2024-11-26 19:29:57.271955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.356 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.272174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.272194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.272327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.272341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.272507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.272521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 
00:28:34.357 [2024-11-26 19:29:57.272790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.272806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.272887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.272897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.272964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.272973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.273102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.273111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.273230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.273244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.273482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.273493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.273685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.273697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.273883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.273896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.274108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.274120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.274259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.274270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 
00:28:34.357 [2024-11-26 19:29:57.274435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.274447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.274540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.274549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.274691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.274703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.274900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.274916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.275128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.275139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.275296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.275307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.275451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.275461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.275654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.275664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.275747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.275756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.275849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.275859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 
00:28:34.357 [2024-11-26 19:29:57.276049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.276064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.276277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.276289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.276463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.276475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.276620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.276631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.276715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.276726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.276944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.276958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.277236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.277251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.277331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.277341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.277506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.277516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.277738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.277750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 
00:28:34.357 [2024-11-26 19:29:57.277886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.277896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.278103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.278121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.278211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.278221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.278413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.278425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.278512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.278521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.357 qpair failed and we were unable to recover it. 00:28:34.357 [2024-11-26 19:29:57.278750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.357 [2024-11-26 19:29:57.278764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.278977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.278990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.279197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.279209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.279342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.279355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.279520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.279531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 
00:28:34.358 [2024-11-26 19:29:57.279616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.279625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.279722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.279732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.279862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.279871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.280060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.280070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.280214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.280224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.280365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.280378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.280569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.280583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.280783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.280794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.280923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.280934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.281131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.281142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 
00:28:34.358 [2024-11-26 19:29:57.281283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.281294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.281536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.281550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.281698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.281711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.281841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.281850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.282053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.282063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.282226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.282237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.282375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.282384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.283682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.283711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.283964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.283975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.284171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.284182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 
00:28:34.358 [2024-11-26 19:29:57.284374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.284389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.284518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.284530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.284749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.284762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.284891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.284901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.285062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.285074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.285269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.285282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.285423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.285433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.285658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.285677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.285823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.285834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 00:28:34.358 [2024-11-26 19:29:57.285969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.285979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.358 qpair failed and we were unable to recover it. 
00:28:34.358 [2024-11-26 19:29:57.286073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.358 [2024-11-26 19:29:57.286083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.286206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.286216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.286409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.286427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.286622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.286635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.286772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.286783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.287005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.287017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.287103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.287113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.287267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.287279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.287479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.287492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.287734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.287749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 
00:28:34.359 [2024-11-26 19:29:57.288001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.288011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.288153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.288163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.288286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.288296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.288426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.288438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.288563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.288574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.291693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.291716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.291965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.291978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.292199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.292215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.292370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.292382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.292590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.292606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 
00:28:34.359 [2024-11-26 19:29:57.292759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.292771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.292996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.293015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.293224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.293240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.293384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.293397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.293616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.293631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.293868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.293880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.294133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.294144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.294361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.294373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.294573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.294585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.294731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.294745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 
00:28:34.359 [2024-11-26 19:29:57.294985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.294998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.295195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.295208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.295298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.295307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.295528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.295544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.295692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.295702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.295844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.295854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.295993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.296004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.296148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.296158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.359 [2024-11-26 19:29:57.296246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.359 [2024-11-26 19:29:57.296255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.359 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.296465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.296479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 
00:28:34.360 [2024-11-26 19:29:57.296696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.296710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.296865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.296876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.297024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.297037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.297124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.297133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.297213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.297223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.297353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.297363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.297448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.297457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.297583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.297594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.297743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.297755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.297955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.297967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 
00:28:34.360 [2024-11-26 19:29:57.298187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.298197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.298334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.298344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.298488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.298499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.298660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.298679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.298762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.298771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.298897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.298908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.299056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.299072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.299218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.299230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.299446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.299458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.299621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.299633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 
00:28:34.360 [2024-11-26 19:29:57.299862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.299878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.300032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.300043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.300245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.300255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.300415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.300426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.300511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.300520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.300663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.300681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.300807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.300818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.301012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.301025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.301217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.301229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.301389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.301399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 
00:28:34.360 [2024-11-26 19:29:57.301554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.301566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.301761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.301775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.302008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.302090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.302327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.302345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.302576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.302590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.302682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.302695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.302834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.360 [2024-11-26 19:29:57.302848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.360 qpair failed and we were unable to recover it. 00:28:34.360 [2024-11-26 19:29:57.303071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.303090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.303324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.303340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.303491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.303505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 
00:28:34.361 [2024-11-26 19:29:57.303745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.303766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.304021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.304038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.304109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.304120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.304202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.304214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.304365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.304379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.304575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.304592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.304820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.304839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.305070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.305088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.305292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.305307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.305534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.305553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 
00:28:34.361 [2024-11-26 19:29:57.305831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.305846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.306056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.306069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.306204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.306220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.306456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.306472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.306622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.306639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.306865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.306881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.307057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.307073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.307284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.307352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.307718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.307785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.308063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.308097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 
00:28:34.361 [2024-11-26 19:29:57.308336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.308367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.308495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.308526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.308792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.308825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.309124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.309154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.309339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.309370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.309654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.309694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.309899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.309929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.310196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.310227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.310474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.310505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.310762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.310795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 
00:28:34.361 [2024-11-26 19:29:57.311080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.311120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.311355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.311385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.311626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.311657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.311928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.311959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.312162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.312193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.361 [2024-11-26 19:29:57.312426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.361 [2024-11-26 19:29:57.312457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.361 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.312700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.312733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.312968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.312998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.313167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.313199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.313483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.313513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 
00:28:34.362 [2024-11-26 19:29:57.313804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.313835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.314048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.314078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.314260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.314290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.314474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.314505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.314704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.314732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.314974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.314988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.315212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.315231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.315329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.315342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.315588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.315604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.315835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.315853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 
00:28:34.362 [2024-11-26 19:29:57.315939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.315954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.316090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.316105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.316308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.316321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.316492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.316506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.316656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.316676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.316812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.316828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.317063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.317080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.317239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.317259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.317489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.317506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.317715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.317735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 
00:28:34.362 [2024-11-26 19:29:57.317895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.317910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.318067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.318079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.318213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.318227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.318396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.318413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.318557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.318573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.318776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.318793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.319020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.319036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.319304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.319326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.319480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.319494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.319627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.319640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 
00:28:34.362 [2024-11-26 19:29:57.319821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.319836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.320065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.320086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.320331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.320347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.320501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.362 [2024-11-26 19:29:57.320516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.362 qpair failed and we were unable to recover it. 00:28:34.362 [2024-11-26 19:29:57.320654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.320675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.320887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.320906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.321061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.321074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.321242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.321255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.321397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.321411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.321561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.321578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 
00:28:34.363 [2024-11-26 19:29:57.321810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.321827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.321981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.321996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.322175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.322193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.322436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.322454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.322686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.322704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.322882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.322897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.323095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.323115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.323346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.323362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.323520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.323534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.323686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.323705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 
00:28:34.363 [2024-11-26 19:29:57.323918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.323937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.324165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.324179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.324333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.324346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.324553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.324570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.324802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.324820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.325088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.325105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.325316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.325334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.325563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.325579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.325739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.325753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.325954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.325969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 
00:28:34.363 [2024-11-26 19:29:57.326192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.326208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.326364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.326379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.326559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.326574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.326779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.326799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.327057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.327079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.327318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.327341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.327498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.327517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.363 [2024-11-26 19:29:57.327626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.363 [2024-11-26 19:29:57.327642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.363 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.327905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.327925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.328147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.328161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 
00:28:34.364 [2024-11-26 19:29:57.328294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.328308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.328461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.328477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.328632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.328646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.328807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.328824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.328905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.328919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.329086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.329102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.329254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.329270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.329361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.329374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.329574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.329587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.329732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.329746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 
00:28:34.364 [2024-11-26 19:29:57.329955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.329972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.330218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.330234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.330464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.330481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.330712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.330734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.330890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.330907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.331045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.331059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.331156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.331167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.331271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.331283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.331450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.331468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.331615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.331631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 
00:28:34.364 [2024-11-26 19:29:57.331768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.331784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.331866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.331879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.332010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.332024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.332206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.332223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.332377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.332393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.332530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.332545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.332688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.332701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.332924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.332938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.333040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.333055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.333258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.333278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 
00:28:34.364 [2024-11-26 19:29:57.333500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.333515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.333613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.333625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.333782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.333799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.333904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.333918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.334057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.334072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.334231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.364 [2024-11-26 19:29:57.334247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.364 qpair failed and we were unable to recover it. 00:28:34.364 [2024-11-26 19:29:57.334379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.334392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.334628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.334643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.334802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.334821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.335051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.335068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 
00:28:34.365 [2024-11-26 19:29:57.335262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.335278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.335505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.335525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.335702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.335724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.335956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.335969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.336112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.336126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.336269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.336293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.336543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.336560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.336634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.336647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.336789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.336805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.337039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.337059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 
00:28:34.365 [2024-11-26 19:29:57.337217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.337234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.337329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.337344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.337445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.337458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.337642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.337654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.337818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.337836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.337975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.337991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.338079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.338093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.338176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.338188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.338324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.338340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.338568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.338586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 
00:28:34.365 [2024-11-26 19:29:57.338765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.338784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.339017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.339030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.339229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.339244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.339419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.339436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.339577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.339592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.339862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.339879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.340066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.340085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.340177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.340191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.340396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.340412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.340623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.340640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 
00:28:34.365 [2024-11-26 19:29:57.340734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.340747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.340944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.340961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.341123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.341139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.341311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.365 [2024-11-26 19:29:57.341327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.365 qpair failed and we were unable to recover it. 00:28:34.365 [2024-11-26 19:29:57.341555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.341572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.341726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.341743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.341924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.341940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.342102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.342115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.342318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.342334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.342546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.342563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 
00:28:34.366 [2024-11-26 19:29:57.342779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.342797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.343003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.343019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.343233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.343251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.343422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.343441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.343685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.343709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.343879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.343899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.343993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.344009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.344236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.344254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.344484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.344498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.344747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.344763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 
00:28:34.366 [2024-11-26 19:29:57.344915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.344931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.345087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.345102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.345316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.345331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.345542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.345560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.345796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.345814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.346075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.346089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.346226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.346242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.346403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.346418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.346647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.346662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.346910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.346927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 
00:28:34.366 [2024-11-26 19:29:57.347086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.347102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.347251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.347266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.347435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.347448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.347662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.347684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.347946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.347964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.348184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.348200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.348343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.348359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.348510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.348527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.348698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.348716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.348918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.348932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 
00:28:34.366 [2024-11-26 19:29:57.349173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.349190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.349349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.349366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.349592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.349607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.349758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.349774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.350009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.350027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.350275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.366 [2024-11-26 19:29:57.350292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.366 qpair failed and we were unable to recover it. 00:28:34.366 [2024-11-26 19:29:57.350518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.350531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.350683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.350699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.350902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.350921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.351142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.351157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 
00:28:34.367 [2024-11-26 19:29:57.351254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.351269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.351415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.351430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.351657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.351681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.351871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.351886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.352060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.352073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.352293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.352310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.352533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.352550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.352713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.352730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.352891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.352907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.353058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.353075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 
00:28:34.367 [2024-11-26 19:29:57.353304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.353321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.353550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.353563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.353647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.353659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.353893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.353913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.354149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.354165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.354395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.354411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.354551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.354568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.354724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.354745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.354909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.354922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 00:28:34.367 [2024-11-26 19:29:57.355064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.367 [2024-11-26 19:29:57.355077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.367 qpair failed and we were unable to recover it. 
00:28:34.367 [2024-11-26 19:29:57.355301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.367 [2024-11-26 19:29:57.355319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420
00:28:34.367 qpair failed and we were unable to recover it.
00:28:34.373 [2024-11-26 19:29:57.395018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.373 [2024-11-26 19:29:57.395037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420
00:28:34.373 qpair failed and we were unable to recover it.
00:28:34.373 [2024-11-26 19:29:57.395136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.395150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.395425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.395442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.395614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.395629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.395803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.395822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.395923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.395942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.396146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.396160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.396293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.396306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.396538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.396556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.396714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.396731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.396897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.396912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 
00:28:34.373 [2024-11-26 19:29:57.397141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.397157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.397295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.397311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.397521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.397538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.397685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.397699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.397788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.397800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.397862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.397873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.398068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.398085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.398234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.398249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.398434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.398450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.398684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.398702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 
00:28:34.373 [2024-11-26 19:29:57.398940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.398960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.399124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.399137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.399268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.399282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.399528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.399547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.399700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.373 [2024-11-26 19:29:57.399717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.373 qpair failed and we were unable to recover it. 00:28:34.373 [2024-11-26 19:29:57.399888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.374 [2024-11-26 19:29:57.399903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.374 qpair failed and we were unable to recover it. 00:28:34.374 [2024-11-26 19:29:57.399990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.374 [2024-11-26 19:29:57.400003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.374 qpair failed and we were unable to recover it. 00:28:34.374 [2024-11-26 19:29:57.400224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.374 [2024-11-26 19:29:57.400242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.374 qpair failed and we were unable to recover it. 00:28:34.680 [2024-11-26 19:29:57.400454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.680 [2024-11-26 19:29:57.400472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.680 qpair failed and we were unable to recover it. 00:28:34.680 [2024-11-26 19:29:57.400644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.680 [2024-11-26 19:29:57.400657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.680 qpair failed and we were unable to recover it. 
00:28:34.680 [2024-11-26 19:29:57.400794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.680 [2024-11-26 19:29:57.400807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.680 qpair failed and we were unable to recover it. 00:28:34.680 [2024-11-26 19:29:57.400956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.680 [2024-11-26 19:29:57.400976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.680 qpair failed and we were unable to recover it. 00:28:34.680 [2024-11-26 19:29:57.401142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.680 [2024-11-26 19:29:57.401158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.680 qpair failed and we were unable to recover it. 00:28:34.680 [2024-11-26 19:29:57.401325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.680 [2024-11-26 19:29:57.401341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.680 qpair failed and we were unable to recover it. 00:28:34.680 [2024-11-26 19:29:57.401493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.680 [2024-11-26 19:29:57.401509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.680 qpair failed and we were unable to recover it. 00:28:34.680 [2024-11-26 19:29:57.401654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.680 [2024-11-26 19:29:57.401674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.680 qpair failed and we were unable to recover it. 00:28:34.680 [2024-11-26 19:29:57.401846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.680 [2024-11-26 19:29:57.401863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.680 qpair failed and we were unable to recover it. 00:28:34.680 [2024-11-26 19:29:57.402133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.680 [2024-11-26 19:29:57.402149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.680 qpair failed and we were unable to recover it. 00:28:34.680 [2024-11-26 19:29:57.402362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.680 [2024-11-26 19:29:57.402376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.680 qpair failed and we were unable to recover it. 00:28:34.680 [2024-11-26 19:29:57.402604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.680 [2024-11-26 19:29:57.402621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.680 qpair failed and we were unable to recover it. 
00:28:34.680 [2024-11-26 19:29:57.402772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.680 [2024-11-26 19:29:57.402789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.680 qpair failed and we were unable to recover it. 00:28:34.680 [2024-11-26 19:29:57.402934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.680 [2024-11-26 19:29:57.402949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.680 qpair failed and we were unable to recover it. 00:28:34.680 [2024-11-26 19:29:57.403096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.680 [2024-11-26 19:29:57.403110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.680 qpair failed and we were unable to recover it. 00:28:34.680 [2024-11-26 19:29:57.403265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.680 [2024-11-26 19:29:57.403281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.680 qpair failed and we were unable to recover it. 00:28:34.680 [2024-11-26 19:29:57.403433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.680 [2024-11-26 19:29:57.403448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.680 qpair failed and we were unable to recover it. 00:28:34.680 [2024-11-26 19:29:57.403546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.680 [2024-11-26 19:29:57.403558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.680 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.403752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.403766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.403912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.403928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.404177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.404194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.404420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.404437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 
00:28:34.681 [2024-11-26 19:29:57.404649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.404674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.404881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.404898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.405175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.405189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.405362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.405379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.405608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.405624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.405774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.405792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.405879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.405892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.406042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.406057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.406194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.406214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.406443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.406464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 
00:28:34.681 [2024-11-26 19:29:57.406625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.406642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.406877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.406895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.407079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.407094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.407188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.407199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.407329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.407342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.407479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.407495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.407578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.407592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.407816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.407833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.407993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.408009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.408101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.408114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 
00:28:34.681 [2024-11-26 19:29:57.408250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.408266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.408367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.408382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.408470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.408485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.681 [2024-11-26 19:29:57.408629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.681 [2024-11-26 19:29:57.408642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.681 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.408775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.408789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.408938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.408952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.409187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.409205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.409298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.409311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.409461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.409477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.409702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.409720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 
00:28:34.682 [2024-11-26 19:29:57.409891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.409908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.410156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.410172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.410379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.410394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.410620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.410638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.410849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.410866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.411131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.411148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.411359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.411377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.411551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.411564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.411787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.411803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.412054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.412074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 
00:28:34.682 [2024-11-26 19:29:57.412213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.412228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.412404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.412419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.412581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.412597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.412739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.412758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.412922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.412939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.413170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.413185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.413361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.413375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.413546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.413564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.413740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.413757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.414004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.414024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 
00:28:34.682 [2024-11-26 19:29:57.414192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.414210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.682 [2024-11-26 19:29:57.414468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.682 [2024-11-26 19:29:57.414485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.682 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.414627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.414640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.414861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.414878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.415042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.415059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.415236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.415251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.415415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.415431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.415656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.415696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.415856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.415872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.416126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.416140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 
00:28:34.683 [2024-11-26 19:29:57.416306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.416320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.416495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.416512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.416766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.416785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.416955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.416971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.417190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.417207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.417346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.417360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.417441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.417453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.417715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.417732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.417871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.417887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.418028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.418042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 
00:28:34.683 [2024-11-26 19:29:57.418141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.418155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.418315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.418330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.418489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.418506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.418743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.418763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.418848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.418859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.418994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.419007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.419228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.419248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.419399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.419415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.419552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.683 [2024-11-26 19:29:57.419566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.683 qpair failed and we were unable to recover it. 00:28:34.683 [2024-11-26 19:29:57.419800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.419818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 
00:28:34.684 [2024-11-26 19:29:57.419958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.419973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.420186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.420205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.420359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.420373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.420513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.420526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.420774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.420793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.420952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.420968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.421206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.421223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.421401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.421416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.421498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.421512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.421650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.421666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 
00:28:34.684 [2024-11-26 19:29:57.421830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.421848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.422003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.422019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.422196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.422212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.422362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.422379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.422513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.422527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.422699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.422715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.422861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.422876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.423043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.423058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.423264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.423280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.423484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.423500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 
00:28:34.684 [2024-11-26 19:29:57.423702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.423721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.423885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.423900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.424147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.424161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.424406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.424432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.424585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.424600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.424753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.424771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.424914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.684 [2024-11-26 19:29:57.424929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.684 qpair failed and we were unable to recover it. 00:28:34.684 [2024-11-26 19:29:57.425075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.425092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.425230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.425246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.425448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.425461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 
00:28:34.685 [2024-11-26 19:29:57.425695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.425712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.425990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.426007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.426206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.426222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.426401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.426416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.426560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.426575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.426710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.426725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.426872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.426884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.427056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.427070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.427216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.427233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.427371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.427386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 
00:28:34.685 [2024-11-26 19:29:57.427563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.427578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.427808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.427826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.428061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.428080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.428221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.428238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.428395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.428408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.428490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.428501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.428662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.428694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.428845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.428862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.429096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.429112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.429370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.429387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 
00:28:34.685 [2024-11-26 19:29:57.429618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.429639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.429884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.429899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.430050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.430064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.430238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.430257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.685 [2024-11-26 19:29:57.430409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.685 [2024-11-26 19:29:57.430424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.685 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.430636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.430652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.430805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.430823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.431072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.431092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.431255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.431268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.431448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.431461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 
00:28:34.686 [2024-11-26 19:29:57.431691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.431712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.431888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.431903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.432008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.432023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.432113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.432126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.432307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.432325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.432530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.432548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.432651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.432664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.432867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.432881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.433095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.433113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.433367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.433384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 
00:28:34.686 [2024-11-26 19:29:57.433615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.433631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.433836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.433855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.434090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.434106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.434196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.434208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.434404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.434418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.434563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.434580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.686 [2024-11-26 19:29:57.434738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.686 [2024-11-26 19:29:57.434755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.686 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.434907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.434921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.435128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.435144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.435359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.435377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 
00:28:34.687 [2024-11-26 19:29:57.435611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.435627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.435884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.435902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.436053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.436070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.436298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.436314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.436397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.436410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.436505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.436518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.436660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.436689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.436919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.436939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.437193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.437210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.437310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.437326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 
00:28:34.687 [2024-11-26 19:29:57.437424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.437440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.437658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.437686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.437845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.437858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.438057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.438075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.438301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.438318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.438547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.438563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.438711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.438728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.438971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.438991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.439133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.439146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.439284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.439297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 
00:28:34.687 [2024-11-26 19:29:57.439385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.439398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.439597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.439616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.439824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.439841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.440015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.440031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.440260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.440280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.440424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.440440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.687 [2024-11-26 19:29:57.440654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.687 [2024-11-26 19:29:57.440674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.687 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.440831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.440845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.441076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.441095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.441246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.441261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 
00:28:34.688 [2024-11-26 19:29:57.441491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.441507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.441738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.441759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.441977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.441994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.442133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.442146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.442289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.442303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.442455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.442472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.442737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.442755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.442923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.442939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.443182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.443205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.443350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.443365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 
00:28:34.688 [2024-11-26 19:29:57.443527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.443542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.443619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.443630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.443776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.443791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.443936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.443951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.444202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.444219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.444450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.444466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.444617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.444632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.444860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.444880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.445022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.445036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.445259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.445273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 
00:28:34.688 [2024-11-26 19:29:57.445481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.445500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.445748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.445765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.445922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.445938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.446185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.446205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.446434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.446451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.446709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.446725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.446951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.446970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.447199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.447214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.447464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.447481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.447688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.447708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 
00:28:34.688 [2024-11-26 19:29:57.447947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.447962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.448116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.448130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.448333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.448351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.448490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.448505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.448653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.448668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.448846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.448866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.449099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.449117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.449256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.688 [2024-11-26 19:29:57.449271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.688 qpair failed and we were unable to recover it. 00:28:34.688 [2024-11-26 19:29:57.449494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.449508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.449730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.449748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 
00:28:34.689 [2024-11-26 19:29:57.449902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.449918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.450121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.450137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.450289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.450304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.450469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.450487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.450742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.450761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.450993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.451006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.451156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.451171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.451352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.451369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.451602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.451618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.451763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.451781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 
00:28:34.689 [2024-11-26 19:29:57.452027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.452045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.452186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.452204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.452370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.452388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.452598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.452615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.452846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.452865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.453075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.453090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.453267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.453283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.453466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.453484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.453707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.453725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.453954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.453972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 
00:28:34.689 [2024-11-26 19:29:57.454227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.454245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.454401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.454414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.454573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.454587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.454689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.454704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.454932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.454949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.455181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.455197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.455353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.455369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.455585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.455604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.455759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.455773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.455908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.455921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 
00:28:34.689 [2024-11-26 19:29:57.456148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.456165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.456255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.456269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.456408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.456423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.456625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.456641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.456896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.456918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.457153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.457171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.457413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.457430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.457654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.689 [2024-11-26 19:29:57.457678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.689 qpair failed and we were unable to recover it. 00:28:34.689 [2024-11-26 19:29:57.457888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.457905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.458161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.458177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 
00:28:34.690 [2024-11-26 19:29:57.458412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.458433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.458658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.458681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.458821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.458835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.459056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.459072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.459284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.459302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.459562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.459579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.459728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.459745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.460010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.460030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.460235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.460249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.460383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.460397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 
00:28:34.690 [2024-11-26 19:29:57.460537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.460553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.460689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.460706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.460936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.460951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.461102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.461117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.461387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.461406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.461638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.461654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.461808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.461823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.461993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.462009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.462258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.462275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.462494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.462510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 
00:28:34.690 [2024-11-26 19:29:57.462657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.462679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.462908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.462926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.463156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.463170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.463335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.463354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.463585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.463603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.463834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.463852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.464002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.464017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.464168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.464185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.464406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.464423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.464574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.464587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 
00:28:34.690 [2024-11-26 19:29:57.464811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.464828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.465034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.465051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.465214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.465229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.465413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.465430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.465660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.465684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.465841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.465857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.466085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.466099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.466305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.466323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.466413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.466427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 00:28:34.690 [2024-11-26 19:29:57.466523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.690 [2024-11-26 19:29:57.466538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.690 qpair failed and we were unable to recover it. 
00:28:34.691 [2024-11-26 19:29:57.466709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.466725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.466818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.466831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.466979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.466995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.467175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.467191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.467329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.467347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.467495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.467510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.467691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.467708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.467937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.467955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.468181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.468196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.468394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.468410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 
00:28:34.691 [2024-11-26 19:29:57.468572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.468593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.468808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.468824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.468921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.468938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.469167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.469186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.469263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.469275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.469429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.469446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.469529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.469540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.469760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.469775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.470040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.470060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.470244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.470259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 
00:28:34.691 [2024-11-26 19:29:57.470434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.470450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.470663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.470688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.470840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.470857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.471083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.471097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.471179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.471190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.471269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.471281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.471502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.471521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.471627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.471642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.471806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.471824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.471922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.471936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 
00:28:34.691 [2024-11-26 19:29:57.472071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.472086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.472305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.472325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.472541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.472555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.472650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.472662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.472890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.472908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-11-26 19:29:57.473134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.691 [2024-11-26 19:29:57.473151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.473356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.473372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.473525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.473540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.473751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.473771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.473953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.473970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 
00:28:34.692 [2024-11-26 19:29:57.474200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.474213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.474412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.474430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.474709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.474728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.474812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.474826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.475056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.475073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.475215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.475230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.475404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.475421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.475648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.475662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.475814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.475830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.475987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.476003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 
00:28:34.692 [2024-11-26 19:29:57.476169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.476184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.476432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.476449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.476697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.476717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.476860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.476874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.477093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.477107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.477332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.477351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.477439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.477453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.477652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.477667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.477905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.477922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.478190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.478211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 
00:28:34.692 [2024-11-26 19:29:57.478420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.478434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.478578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.478592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.478759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.478778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.478956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.478972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.479200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.479216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.479369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.479385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.479552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.479569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.479775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.479792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.479937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.479950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.480123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.480138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 
00:28:34.692 [2024-11-26 19:29:57.480343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.480361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.480509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.480524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.480662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.480684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.480891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.480909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.481153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.481171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.481320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.481334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.481476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.692 [2024-11-26 19:29:57.481489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-11-26 19:29:57.481565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.481577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.481710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.481743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.481915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.481930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 
00:28:34.693 [2024-11-26 19:29:57.482082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.482097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.482243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.482259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.482489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.482507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.482734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.482756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.482997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.483015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.483250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.483267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.483502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.483516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.483614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.483626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.483775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.483794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.484033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.484050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 
00:28:34.693 [2024-11-26 19:29:57.484209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.484226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.484379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.484395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.484546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.484563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.484736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.484754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.484958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.484972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.485118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.485131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.485346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.485364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.485581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.485597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.485847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.485864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.486021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.486037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 
00:28:34.693 [2024-11-26 19:29:57.486214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.486229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.486367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.486380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.486626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.486644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.486802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.486821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.486997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.487014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.487181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.487202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.487356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.487373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.487551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.487569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.487721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.487738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.487993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.488007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 
00:28:34.693 [2024-11-26 19:29:57.488152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.488166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.488250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.488263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.488493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.488512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.488744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.488761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.488910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.488925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.489167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.489185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.489340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.489356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.489455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.489467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.489610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.693 [2024-11-26 19:29:57.489624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.693 [2024-11-26 19:29:57.489741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.489757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 
00:28:34.694 [2024-11-26 19:29:57.489848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.489862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.489952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.489966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.490119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.490133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.490283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.490299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.490509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.490525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.490738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.490759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.490949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.490963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.491114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.491128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.491221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.491234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 
00:28:34.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3905609 Killed "${NVMF_APP[@]}" "$@" 00:28:34.694 [2024-11-26 19:29:57.491455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.491475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.491727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.491746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.491978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 19:29:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:28:34.694 [2024-11-26 19:29:57.491994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.492223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.492243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 19:29:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:34.694 [2024-11-26 19:29:57.492404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.492419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.492646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.492661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 19:29:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:34.694 [2024-11-26 19:29:57.492827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.492846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 19:29:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:34.694 [2024-11-26 19:29:57.493090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.493107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 
00:28:34.694 19:29:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.694 [2024-11-26 19:29:57.493311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.493327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.493562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.493580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.493794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.493811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.493962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.493975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.494174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.494192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.494356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.494372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.494525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.494543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.494749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.494767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.494872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.494887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.495030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.495047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 
00:28:34.694 [2024-11-26 19:29:57.495127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.495141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.495372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.495386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.495520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.495533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.495728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.495748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.496009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.496026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.496112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.496126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.496352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.496369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.496455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.496468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.496680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.496699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.496926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.496940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 
00:28:34.694 [2024-11-26 19:29:57.497164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.694 [2024-11-26 19:29:57.497182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.694 qpair failed and we were unable to recover it. 00:28:34.694 [2024-11-26 19:29:57.497420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.497438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 00:28:34.695 [2024-11-26 19:29:57.497656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.497678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 00:28:34.695 [2024-11-26 19:29:57.497832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.497848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 00:28:34.695 [2024-11-26 19:29:57.497937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.497952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 00:28:34.695 [2024-11-26 19:29:57.498038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.498052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 00:28:34.695 [2024-11-26 19:29:57.498155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.498171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 00:28:34.695 [2024-11-26 19:29:57.498265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.498283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 00:28:34.695 [2024-11-26 19:29:57.498485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.498502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 00:28:34.695 [2024-11-26 19:29:57.498681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.498698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 
00:28:34.695 [2024-11-26 19:29:57.498847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.498863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 00:28:34.695 [2024-11-26 19:29:57.499080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.499095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 00:28:34.695 [2024-11-26 19:29:57.499303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.499320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 00:28:34.695 [2024-11-26 19:29:57.499417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.499436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 00:28:34.695 [2024-11-26 19:29:57.499530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.499544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 00:28:34.695 [2024-11-26 19:29:57.499699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.499718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 00:28:34.695 [2024-11-26 19:29:57.499885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 19:29:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3906341 00:28:34.695 [2024-11-26 19:29:57.499901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 00:28:34.695 [2024-11-26 19:29:57.500009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.500023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 00:28:34.695 [2024-11-26 19:29:57.500178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.500195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 
00:28:34.695 19:29:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3906341 00:28:34.695 19:29:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:34.695 [2024-11-26 19:29:57.500460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.500481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 00:28:34.695 19:29:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3906341 ']' 00:28:34.695 [2024-11-26 19:29:57.500635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.500649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 00:28:34.695 [2024-11-26 19:29:57.500808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.695 [2024-11-26 19:29:57.500823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.695 qpair failed and we were unable to recover it. 00:28:34.695 [2024-11-26 19:29:57.500921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.500941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 19:29:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.696 [2024-11-26 19:29:57.501050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.501065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 19:29:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.696 [2024-11-26 19:29:57.501295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.501313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 19:29:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.696 [2024-11-26 19:29:57.501564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.501581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 
00:28:34.696 [2024-11-26 19:29:57.501811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 19:29:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.696 [2024-11-26 19:29:57.501832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.501986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.502002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 19:29:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.696 [2024-11-26 19:29:57.502148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.502164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.502336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.502351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.502586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.502604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.502703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.502718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.502803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.502818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.502966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.502981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.503119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.503133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 
00:28:34.696 [2024-11-26 19:29:57.503285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.503302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.503404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.503421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.503662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.503685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.503907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.503925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.504081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.504098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.504238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.504253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.504457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.504473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.504609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.504624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.504797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.504815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.504917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.504932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 
00:28:34.696 [2024-11-26 19:29:57.505078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.505092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.505271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.505284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.505455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.505471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.505547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.505561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.505724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.505750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.696 [2024-11-26 19:29:57.505926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.696 [2024-11-26 19:29:57.505942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.696 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.506051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.506065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.506216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.506232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.506472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.506495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.506650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.506665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 
00:28:34.697 [2024-11-26 19:29:57.506772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.506785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.506932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.506948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.507094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.507111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.507333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.507350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.507452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.507469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.507680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.507698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.507922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.507942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.508139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.508157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.508393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.508408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.508569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.508585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 
00:28:34.697 [2024-11-26 19:29:57.508697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.508716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.508861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.508893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.509045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.509060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.509236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.509252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.509536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.509559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.509731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.509746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.509899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.509914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.510006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.510019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.510096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.510109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.510199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.510213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 
00:28:34.697 [2024-11-26 19:29:57.510302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.510332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.510440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.510459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.510597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.510614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.510790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.510808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.510970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.697 [2024-11-26 19:29:57.510990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.697 qpair failed and we were unable to recover it. 00:28:34.697 [2024-11-26 19:29:57.511082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.511098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.511268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.511284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.511372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.511395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.511604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.511620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.511788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.511807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 
00:28:34.698 [2024-11-26 19:29:57.511908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.511924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.512072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.512089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.512188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.512203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.512293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.512307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.512403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.512418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.512511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.512526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.512664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.512697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.512913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.512930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.513135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.513149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.513353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.513369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 
00:28:34.698 [2024-11-26 19:29:57.513513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.513531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.513680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.513697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.513902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.513919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.514067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.514083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.514174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.514189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.514457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.514493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.514641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.514660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.514770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.514787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.514886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.514903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.515046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.515063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 
00:28:34.698 [2024-11-26 19:29:57.515326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.515345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.515431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.515443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.515593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.515610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.515684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.515714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.515869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.515886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.698 [2024-11-26 19:29:57.515959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.698 [2024-11-26 19:29:57.515971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.698 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.516058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.516072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.516160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.516189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.516299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.516313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.516399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.516414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 
00:28:34.699 [2024-11-26 19:29:57.516620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.516641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.516888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.516909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.517111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.517152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.517383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.517414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.517652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.517694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.517957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.517989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.518226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.518257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.518447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.518478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.518668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.518709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.518910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.518941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 
00:28:34.699 [2024-11-26 19:29:57.519180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.519211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.519431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.519464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.519586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.519617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.519769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.519802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.519976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.520007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.520187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.520219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.520443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.520474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.520643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.520679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.520802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.520820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.520949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.520962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 
00:28:34.699 [2024-11-26 19:29:57.521161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.521174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.521266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.521279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.521364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.521380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.521538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.521556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.521652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.521676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.521748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.699 [2024-11-26 19:29:57.521762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.699 qpair failed and we were unable to recover it. 00:28:34.699 [2024-11-26 19:29:57.521862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.521874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.521967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.521983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.522128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.522143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.522262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.522297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 
00:28:34.700 [2024-11-26 19:29:57.522473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.522504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.522626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.522657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.522853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.522886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.523020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.523052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.523179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.523211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.523320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.523352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.523461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.523493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.523676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.523711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.523839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.523862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.523940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.523952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 
00:28:34.700 [2024-11-26 19:29:57.524022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.524033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.524136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.524150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.524287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.524308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.524446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.524459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.524590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.524605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.524681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.524695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.524764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.524777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.524844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.524858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.525000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.525016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.525120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.525134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 
00:28:34.700 [2024-11-26 19:29:57.525273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.525289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.525365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.525376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.525561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.525577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.525656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.525687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.525760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.525773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.525858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.525873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.700 qpair failed and we were unable to recover it. 00:28:34.700 [2024-11-26 19:29:57.526001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.700 [2024-11-26 19:29:57.526014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.526176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.526192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.526446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.526465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.526550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.526567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 
00:28:34.701 [2024-11-26 19:29:57.526666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.526687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.526843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.526872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.527079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.527093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.527175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.527186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.527263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.527275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.527416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.527434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.527573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.527587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.527733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.527750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.527840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.527853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.527937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.527951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 
00:28:34.701 [2024-11-26 19:29:57.528036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.528049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.528189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.528206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.528296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.528309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.528447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.528462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.528537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.528548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.528639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.528650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.528856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.528874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.528946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.528960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.529110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.529125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.529230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.529245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 
00:28:34.701 [2024-11-26 19:29:57.529329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.701 [2024-11-26 19:29:57.529343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.701 qpair failed and we were unable to recover it. 00:28:34.701 [2024-11-26 19:29:57.529432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.529445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.529521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.529535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.529618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.529630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.529716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.529731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.529805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.529818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.529950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.529966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.530120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.530139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.530231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.530246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.530320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.530336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 
00:28:34.702 [2024-11-26 19:29:57.530419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.530434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.530508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.530522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.530616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.530630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.530705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.530718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.530792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.530803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.530881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.530893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.530971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.530982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.531129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.531144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.531244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.531259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.531402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.531416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 
00:28:34.702 [2024-11-26 19:29:57.531557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.531571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.531736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.531754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.531836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.531849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.531987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.532005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.532156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.532173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.532261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.532274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.532347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.532359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.532424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.532436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.532521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.532534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.702 [2024-11-26 19:29:57.532686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.532703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 
00:28:34.702 [2024-11-26 19:29:57.532790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.702 [2024-11-26 19:29:57.532807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.702 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.532895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.532910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.532979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.532992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.533081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.533094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.533233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.533247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.533338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.533351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.533427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.533440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.533523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.533536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.533611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.533624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.533707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.533720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 
00:28:34.703 [2024-11-26 19:29:57.533860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.533874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.533944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.533955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.534042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.534053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.534122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.534132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.534204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.534218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.534303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.534316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.534454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.534469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.534547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.534560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.534650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.534663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.534752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.534767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 
00:28:34.703 [2024-11-26 19:29:57.534903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.534917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.535003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.535016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.535086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.535098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.535172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.535184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.535322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.535337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.535415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.535430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.535514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.535526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.535588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.535602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.535685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.535697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.535831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.535844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 
00:28:34.703 [2024-11-26 19:29:57.535918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.703 [2024-11-26 19:29:57.535931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.703 qpair failed and we were unable to recover it. 00:28:34.703 [2024-11-26 19:29:57.536068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.536084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.536155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.536169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.536234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.536248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.536387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.536400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.536476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.536491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.536575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.536588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.536652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.536665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.536755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.536769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.536837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.536848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 
00:28:34.704 [2024-11-26 19:29:57.536933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.536948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.537018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.537031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.537099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.537112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.537258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.537271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.537352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.537363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.537432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.537443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.537526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.537537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.537610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.537623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.537707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.537721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.537823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.537838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 
00:28:34.704 [2024-11-26 19:29:57.537999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.538014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.538102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.538115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.538205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.538219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.538303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.538317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.538407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.538420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.538593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.538611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.538686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.538700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.538837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.538852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.538983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.704 [2024-11-26 19:29:57.538995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.704 qpair failed and we were unable to recover it. 00:28:34.704 [2024-11-26 19:29:57.539144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.539156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 
00:28:34.705 [2024-11-26 19:29:57.539232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.539243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.539379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.539397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.539467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.539481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.539564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.539578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.539664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.539686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.539762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.539775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.539919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.539933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.540011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.540024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.540101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.540114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.540203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.540216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 
00:28:34.705 [2024-11-26 19:29:57.540356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.540371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.540507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.540521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.540588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.540600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.540664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.540685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.540839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.540852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.540923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.540936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.541009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.541023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.541090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.541103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.541180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.541192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.541283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.541296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 
00:28:34.705 [2024-11-26 19:29:57.541437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.541451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.541722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.541742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.541848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.541863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.541936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.541954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.542031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.542043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.542105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.542115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.542267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.542280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.542377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.705 [2024-11-26 19:29:57.542392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.705 qpair failed and we were unable to recover it. 00:28:34.705 [2024-11-26 19:29:57.542473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.542487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.542589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.542605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 
00:28:34.706 [2024-11-26 19:29:57.542746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.542763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.542836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.542849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.543001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.543017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.543191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.543207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.543358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.543377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.543459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.543477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.543568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.543581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.543650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.543662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.543745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.543757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.543843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.543855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 
00:28:34.706 [2024-11-26 19:29:57.543998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.544013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.544083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.544096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.544188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.544203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.544265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.544278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.544356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.544370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.544456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.544469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.544612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.544626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.544703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.544718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.544792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.544806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.544885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.544898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 
00:28:34.706 [2024-11-26 19:29:57.545106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.545126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.545278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.545290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.545432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.545444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.545578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.545593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.545684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.545701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.545845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.706 [2024-11-26 19:29:57.545860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.706 qpair failed and we were unable to recover it. 00:28:34.706 [2024-11-26 19:29:57.545944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.707 [2024-11-26 19:29:57.545958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-11-26 19:29:57.546184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.707 [2024-11-26 19:29:57.546200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-11-26 19:29:57.546274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.707 [2024-11-26 19:29:57.546288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-11-26 19:29:57.546426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.707 [2024-11-26 19:29:57.546443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.707 qpair failed and we were unable to recover it. 
00:28:34.707 [2024-11-26 19:29:57.546524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.707 [2024-11-26 19:29:57.546538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-11-26 19:29:57.546684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.707 [2024-11-26 19:29:57.546705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-11-26 19:29:57.546793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.707 [2024-11-26 19:29:57.546812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-11-26 19:29:57.546897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.707 [2024-11-26 19:29:57.546913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-11-26 19:29:57.546984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.707 [2024-11-26 19:29:57.546999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-11-26 19:29:57.547079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.707 [2024-11-26 19:29:57.547095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-11-26 19:29:57.547237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.707 [2024-11-26 19:29:57.547255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-11-26 19:29:57.547348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.707 [2024-11-26 19:29:57.547364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-11-26 19:29:57.547439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.707 [2024-11-26 19:29:57.547456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-11-26 19:29:57.547519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.707 [2024-11-26 19:29:57.547530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.707 qpair failed and we were unable to recover it. 
00:28:34.707 [2024-11-26 19:29:57.547612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.707 [2024-11-26 19:29:57.547625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-11-26 19:29:57.547711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.707 [2024-11-26 19:29:57.547725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-11-26 19:29:57.547741] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:28:34.707 [2024-11-26 19:29:57.547789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.707 [2024-11-26 19:29:57.547792] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.707 [2024-11-26 19:29:57.547806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-11-26 19:29:57.548011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.707 [2024-11-26 19:29:57.548029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-11-26 19:29:57.548099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.707 [2024-11-26 19:29:57.548112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.548187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.548200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.548283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.548297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.548487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.548501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.548589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.548602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 
00:28:34.708 [2024-11-26 19:29:57.548690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.548707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.548840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.548856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.548924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.548941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.549018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.549030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.549092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.549104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.549237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.549251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.549329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.549352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.549432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.549448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.549521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.549535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.549614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.549632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 
00:28:34.708 [2024-11-26 19:29:57.549770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.549787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.549869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.549884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.550036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.550052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.550141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.550155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.550224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.550239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.550316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.550330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.550413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.550429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.550508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.550521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.550592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.550604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.550699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.550713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 
00:28:34.708 [2024-11-26 19:29:57.550790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.550803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.550877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.550901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.550970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.550986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-11-26 19:29:57.551065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.708 [2024-11-26 19:29:57.551080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.551281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.551298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.551450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.551466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.551555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.551570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.551656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.551678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.551754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.551769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.551851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.551865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 
00:28:34.709 [2024-11-26 19:29:57.551956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.551972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.552111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.552125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.552194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.552206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.552284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.552296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.552383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.552395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.552461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.552475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.552548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.552567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.552660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.552684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.552755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.552769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.552845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.552858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 
00:28:34.709 [2024-11-26 19:29:57.552993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.553007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.553089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.553103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.553192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.553207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.553277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.553291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.553367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.553380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.553538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.553555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.553626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.553640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.553719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.553734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.553806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.553818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.553954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.553968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 
00:28:34.709 [2024-11-26 19:29:57.554112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.554125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.554194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.554207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.554280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.554296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.709 qpair failed and we were unable to recover it. 00:28:34.709 [2024-11-26 19:29:57.554371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.709 [2024-11-26 19:29:57.554385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.554457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.554471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.554564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.554577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.554732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.554748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.554885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.554900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.554980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.554993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.555057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.555072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 
00:28:34.710 [2024-11-26 19:29:57.555146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.555160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.555294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.555311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.555382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.555394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.555455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.555470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.555607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.555619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.555697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.555711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.555783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.555798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.555874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.555886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.556035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.556051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.556187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.556202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 
00:28:34.710 [2024-11-26 19:29:57.556279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.556293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.556439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.556453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.556535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.556548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.556632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.556647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.556748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.556763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.556830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.556843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.556924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.556938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.557012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.557025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.557167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.557180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.557250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.557275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 
00:28:34.710 [2024-11-26 19:29:57.557362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.557375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.557507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.557523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.557609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.557624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.557714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.710 [2024-11-26 19:29:57.557730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.710 qpair failed and we were unable to recover it. 00:28:34.710 [2024-11-26 19:29:57.557809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.711 [2024-11-26 19:29:57.557823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.711 qpair failed and we were unable to recover it. 00:28:34.711 [2024-11-26 19:29:57.557894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.711 [2024-11-26 19:29:57.557908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.711 qpair failed and we were unable to recover it. 00:28:34.711 [2024-11-26 19:29:57.557973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.711 [2024-11-26 19:29:57.557987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.711 qpair failed and we were unable to recover it. 00:28:34.711 [2024-11-26 19:29:57.558193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.711 [2024-11-26 19:29:57.558208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.711 qpair failed and we were unable to recover it. 00:28:34.711 [2024-11-26 19:29:57.558359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.711 [2024-11-26 19:29:57.558375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.711 qpair failed and we were unable to recover it. 00:28:34.711 [2024-11-26 19:29:57.558527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.711 [2024-11-26 19:29:57.558543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.711 qpair failed and we were unable to recover it. 
00:28:34.711 [2024-11-26 19:29:57.558684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.711 [2024-11-26 19:29:57.558697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420
00:28:34.711 qpair failed and we were unable to recover it.
00:28:34.711 [... the same three-line sequence -- posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats continuously, with SPDK timestamps running from 2024-11-26 19:29:57.558697 through 19:29:57.581284 ...]
00:28:34.718 [2024-11-26 19:29:57.581368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.718 [2024-11-26 19:29:57.581382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.718 qpair failed and we were unable to recover it. 00:28:34.718 [2024-11-26 19:29:57.581466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.718 [2024-11-26 19:29:57.581479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.718 qpair failed and we were unable to recover it. 00:28:34.718 [2024-11-26 19:29:57.581630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.718 [2024-11-26 19:29:57.581644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.718 qpair failed and we were unable to recover it. 00:28:34.718 [2024-11-26 19:29:57.581792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.718 [2024-11-26 19:29:57.581809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.718 qpair failed and we were unable to recover it. 00:28:34.718 [2024-11-26 19:29:57.581883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.718 [2024-11-26 19:29:57.581896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.718 qpair failed and we were unable to recover it. 00:28:34.718 [2024-11-26 19:29:57.582046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.718 [2024-11-26 19:29:57.582058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.718 qpair failed and we were unable to recover it. 00:28:34.718 [2024-11-26 19:29:57.582145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.718 [2024-11-26 19:29:57.582160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.718 qpair failed and we were unable to recover it. 00:28:34.718 [2024-11-26 19:29:57.582243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.718 [2024-11-26 19:29:57.582256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.718 qpair failed and we were unable to recover it. 00:28:34.718 [2024-11-26 19:29:57.582352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.718 [2024-11-26 19:29:57.582366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.718 qpair failed and we were unable to recover it. 00:28:34.718 [2024-11-26 19:29:57.582433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.718 [2024-11-26 19:29:57.582444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.718 qpair failed and we were unable to recover it. 
00:28:34.718 [2024-11-26 19:29:57.582578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.718 [2024-11-26 19:29:57.582590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.718 qpair failed and we were unable to recover it. 00:28:34.718 [2024-11-26 19:29:57.582656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.718 [2024-11-26 19:29:57.582668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.718 qpair failed and we were unable to recover it. 00:28:34.718 [2024-11-26 19:29:57.582813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.718 [2024-11-26 19:29:57.582827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.718 qpair failed and we were unable to recover it. 00:28:34.718 [2024-11-26 19:29:57.582900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.718 [2024-11-26 19:29:57.582914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.718 qpair failed and we were unable to recover it. 00:28:34.718 [2024-11-26 19:29:57.583083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.583097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.583247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.583261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.583341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.583354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.583430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.583443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.583516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.583529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.583613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.583625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 
00:28:34.719 [2024-11-26 19:29:57.583743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.583759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.583906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.583925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.583995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.584007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.584102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.584114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.584318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.584334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.584411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.584425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.584580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.584597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.584685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.584701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.584774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.584787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.584872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.584886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 
00:28:34.719 [2024-11-26 19:29:57.584958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.584973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.585056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.585070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.585142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.585155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.585245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.585259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.585330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.585344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.585435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.585449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.585586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.585602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.719 [2024-11-26 19:29:57.585702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.719 [2024-11-26 19:29:57.585714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.719 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.585807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.585818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.585898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.585910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 
00:28:34.720 [2024-11-26 19:29:57.586060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.586075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.586214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.586230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.586305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.586320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.586464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.586479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.586563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.586578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.586784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.586806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.586883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.586896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.586982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.586995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.587076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.587094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.587238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.587252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 
00:28:34.720 [2024-11-26 19:29:57.587397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.587415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.587558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.587572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.587712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.587729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.587808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.587821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.587973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.587991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.588075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.588090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.588168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.588185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.588261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.588276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.588435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.588453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.588553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.588567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 
00:28:34.720 [2024-11-26 19:29:57.588680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.588696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.588789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.588803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.588869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.588884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.588963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.588978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.589114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.589128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.720 qpair failed and we were unable to recover it. 00:28:34.720 [2024-11-26 19:29:57.589265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.720 [2024-11-26 19:29:57.589279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.589352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.589368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.589437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.589450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.589520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.589534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.589627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.589643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 
00:28:34.721 [2024-11-26 19:29:57.589793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.589809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.589906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.589922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.589995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.590010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.590098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.590113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.590188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.590202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.590282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.590299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.590454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.590473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.590551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.590566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.590661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.590685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.590760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.590773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 
00:28:34.721 [2024-11-26 19:29:57.590832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.590843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.590982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.590996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.591086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.591099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.591173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.591190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.591272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.591287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.591363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.591377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.591447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.591462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.591533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.591547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.591702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.591721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.591797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.591810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 
00:28:34.721 [2024-11-26 19:29:57.591966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.591983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.592074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.592089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.592159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.592171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.592308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.592324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.592424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.721 [2024-11-26 19:29:57.592438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.721 qpair failed and we were unable to recover it. 00:28:34.721 [2024-11-26 19:29:57.592523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.592536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.592618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.592630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.592763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.592779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.592880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.592896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.592995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.593010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 
00:28:34.722 [2024-11-26 19:29:57.593104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.593119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.593266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.593282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.593366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.593379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.593481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.593497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.593635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.593651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.593763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.593780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.593866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.593880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.593954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.593969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.594045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.594058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.594140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.594152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 
00:28:34.722 [2024-11-26 19:29:57.594300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.594315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.594375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.594386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.594464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.594477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.594554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.594568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.594648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.594663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.594773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.594791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.594869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.594883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.594955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.594968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.595049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.595063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.595131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.595144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 
00:28:34.722 [2024-11-26 19:29:57.595210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.595224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.595319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.595332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.595402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.595415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.595490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.595502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.595571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.722 [2024-11-26 19:29:57.595584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.722 qpair failed and we were unable to recover it. 00:28:34.722 [2024-11-26 19:29:57.595659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.723 [2024-11-26 19:29:57.595680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.723 qpair failed and we were unable to recover it. 00:28:34.723 [2024-11-26 19:29:57.595814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.723 [2024-11-26 19:29:57.595830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.723 qpair failed and we were unable to recover it. 00:28:34.723 [2024-11-26 19:29:57.595900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.723 [2024-11-26 19:29:57.595911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.723 qpair failed and we were unable to recover it. 00:28:34.723 [2024-11-26 19:29:57.596050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.723 [2024-11-26 19:29:57.596064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.723 qpair failed and we were unable to recover it. 00:28:34.723 [2024-11-26 19:29:57.596123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.723 [2024-11-26 19:29:57.596135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.723 qpair failed and we were unable to recover it. 
00:28:34.723 [2024-11-26 19:29:57.596218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.723 [2024-11-26 19:29:57.596230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420
00:28:34.723 qpair failed and we were unable to recover it.
[... the same three-line connect() failure (errno = 111, tqpair=0x1c49be0, addr=10.0.0.2, port=4420) repeats back-to-back with timestamps 19:29:57.596306 through 19:29:57.620097 ...]
00:28:34.729 [2024-11-26 19:29:57.620169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.729 [2024-11-26 19:29:57.620182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420
00:28:34.729 qpair failed and we were unable to recover it.
00:28:34.729 [2024-11-26 19:29:57.620247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.620261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.620402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.620416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.620505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.620519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.620584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.620597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.620682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.620696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.620767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.620781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.620846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.620859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.620945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.620959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.621041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.621054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.621134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.621147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 
00:28:34.729 [2024-11-26 19:29:57.621224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.621237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.621397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.621411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.621491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.621504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.621589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.621602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.621687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.621700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.621836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.621849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.621926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.621939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.622034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.622047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.622140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.622155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.622221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.622234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 
00:28:34.729 [2024-11-26 19:29:57.622308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.622324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.622399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.622413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.622491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.622505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.622575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.622589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.622730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.622749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.622824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.622841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.622925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.622942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.623020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.623037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.623126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.623143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.623216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.623232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 
00:28:34.729 [2024-11-26 19:29:57.623382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.623398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.623562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.623578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-11-26 19:29:57.623739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-11-26 19:29:57.623758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.623854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.623871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.624015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.624033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.624111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.624127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.624235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.624253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.624342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.624358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.624585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.624603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.624680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.624698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 
00:28:34.730 [2024-11-26 19:29:57.624843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.624860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.624947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.624963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.625066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.625084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.625166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.625183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.625329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.625347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.625438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.625456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.625611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.625628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.625741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.625761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.625838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.625855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.626004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.626021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 
00:28:34.730 [2024-11-26 19:29:57.626168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.626185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.626269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.626285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.626370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.626387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.626481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.626498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.626567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.626584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.626659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.626685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.626792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.626808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.626957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.626974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.627062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.627079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.627247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.627264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 
00:28:34.730 [2024-11-26 19:29:57.627339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.627356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.627457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.627475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.627624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.627640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.627727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.627746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.627885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.627902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.627977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.627993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.628096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.628113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.628209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-11-26 19:29:57.628226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-11-26 19:29:57.628306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.628322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.628406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.628423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 
00:28:34.731 [2024-11-26 19:29:57.628494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.628510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.628581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.628599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.628704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.628722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.628803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.628819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.628908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.628925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.629068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.629086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.629174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.629190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.629261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.629278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.629355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.629371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.629440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.629457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 
00:28:34.731 [2024-11-26 19:29:57.629532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.629548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.629630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.629647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.629743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.629760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.629844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.629862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.629952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.629969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.630062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.630078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.630152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.630169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.630277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.630295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.630395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.630412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.630556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.630573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 
00:28:34.731 [2024-11-26 19:29:57.630652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.630677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.630768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.630785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.630887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.630905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.631064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.631081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.631169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.631186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.631326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.631343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.631484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.631502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.631585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.631602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.631707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.631725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.631797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.631813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 
00:28:34.731 [2024-11-26 19:29:57.631903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.631921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.632000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.632016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.632111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.632127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.632218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.632235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.632317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.632334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.632417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.632434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.632516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.632533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.632614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.632630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.632720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.632739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-11-26 19:29:57.632820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-11-26 19:29:57.632839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 
00:28:34.732 [2024-11-26 19:29:57.632992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.633012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.633094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.633115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.633261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.633281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.633363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.633383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.633474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.633494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.633585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.633609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.633702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.633722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.633817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.633837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.633925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.633945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.634036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.634055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 
00:28:34.732 [2024-11-26 19:29:57.634134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.634154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.634297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.634316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.634398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.634419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.634501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.634520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.634614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.634633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.634715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.634736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.634827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.634846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.634946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.634966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.635042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.635062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.635209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.635230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 
00:28:34.732 [2024-11-26 19:29:57.635318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.635337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.635443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.635464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.635537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.635556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.635639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.635659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.635751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.635770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.635851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.635871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.636034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.636055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.636162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.636182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.636269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.636289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.636368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.636387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 
00:28:34.732 [2024-11-26 19:29:57.636484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.636504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.636597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.636616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.636625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:34.732 [2024-11-26 19:29:57.636771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.636792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.636883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.636902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.636989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.637009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.637159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.637178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.637257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.637277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.637370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.637390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.637495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.637517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.637599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.637618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 
00:28:34.732 [2024-11-26 19:29:57.637711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-26 19:29:57.637732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-26 19:29:57.637830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.637849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.637936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.637956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.638040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.638059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.638162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.638181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.638344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.638364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.638517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.638537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.638620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.638639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.638814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.638836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.638918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.638937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 
00:28:34.733 [2024-11-26 19:29:57.639026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.639047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.639130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.639150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.639236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.639254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.639366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.639387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.639469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.639489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.639613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.639634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.639807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.639829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.639917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.639935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.640156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.640178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.640325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.640346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 
00:28:34.733 [2024-11-26 19:29:57.640455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.640476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.640620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.640640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.640732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.640754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.640906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.640927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.641083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.641105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.641198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.641216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.641324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.641344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.641505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.641525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.641607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.641627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.641748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.641769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 
00:28:34.733 [2024-11-26 19:29:57.641844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.641863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.641946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.641965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.642048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.642068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.642159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.642180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.642338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.642358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.642444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.642464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.642541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.642560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.642732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.642757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.642858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.642879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.642978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.642999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 
00:28:34.733 [2024-11-26 19:29:57.643085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.643105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.643193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.643214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.643297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.643318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.643398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.643418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.643507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-26 19:29:57.643528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-26 19:29:57.643618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.643639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.643762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.643790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.643878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.643899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.644006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.644027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.644119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.644141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 
00:28:34.734 [2024-11-26 19:29:57.644241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.644262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.644499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.644522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.644607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.644629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.644828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.644854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.644938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.644957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.645130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.645152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.645253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.645273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.645371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.645392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.645629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.645652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.645873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.645896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 
00:28:34.734 [2024-11-26 19:29:57.645992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.646013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.646239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.646261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.646429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.646451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.646543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.646562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.646718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.646742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.646846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.646867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.646980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.647000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.647090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.647111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.647331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.647353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.647501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.647523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 
00:28:34.734 [2024-11-26 19:29:57.647739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.647762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.647939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.647961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.648067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.648088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.648274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.648301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.648467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.648489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.648658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.648701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.648812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.648834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.648999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.649020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.649124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.649146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.649294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.649315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 
00:28:34.734 [2024-11-26 19:29:57.649415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.649435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.649665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.649695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.649817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-26 19:29:57.649838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-26 19:29:57.649929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.649949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.650110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.650132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.650312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.650333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.650588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.650610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.650813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.650836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.650948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.650971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.651091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.651112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 
00:28:34.735 [2024-11-26 19:29:57.651255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.651277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.651380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.651400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.651667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.651732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.651846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.651868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.652067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.652088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.652182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.652202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.652442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.652464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.652692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.652719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.652835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.652861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.652988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.653014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 
00:28:34.735 [2024-11-26 19:29:57.653173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.653199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.653388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.653415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.653684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.653712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.653873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.653899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.654090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.654117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.654229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.654255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.654492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.654518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.654727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.654754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-26 19:29:57.654917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-26 19:29:57.654944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.655119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.655146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 
00:28:34.736 [2024-11-26 19:29:57.655254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.655280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.655522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.655549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.655641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.655666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.655854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.655880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.656156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.656230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.656492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.656537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.656812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.656846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.656988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.657020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.657189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.657220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.657476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.657509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 
00:28:34.736 [2024-11-26 19:29:57.657726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.657759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.657886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.657918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.658097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.658129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.658264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.658296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.658422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.658453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.658699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.658733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.658867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.658895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.659058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.659089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.659260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.659287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.659515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.659542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 
00:28:34.736 [2024-11-26 19:29:57.659650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.659684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.659888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.659915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.660165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.660191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.660352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.660377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.660539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.660564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.660744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.660772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.660895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.660922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.661092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.661119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.661295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.661322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.661532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.661558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 
00:28:34.736 [2024-11-26 19:29:57.661740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-26 19:29:57.661768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-26 19:29:57.661949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.661984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.662173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.662205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.662463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.662494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.662739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.662774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.662913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.662944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.663060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.663091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.663217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.663251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.663522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.663553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.663743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.663776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 
00:28:34.737 [2024-11-26 19:29:57.663965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.663999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.664131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.664162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.664458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.664490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.664687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.664720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.664844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.664884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.665009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.665042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.665148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.665180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.665353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.665385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.665623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.665656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.665845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.665878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 
00:28:34.737 [2024-11-26 19:29:57.666011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.666043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.666217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.666247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.666459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.666490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.666688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.666722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.666847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.666880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.667129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.667160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.667357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.667388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.667642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.667684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.667877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.667909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.668051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.668084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 
00:28:34.737 [2024-11-26 19:29:57.668212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.668244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.668435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.668466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.668596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.668628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.668801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.668833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.668964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.668995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.669184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.669217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.669415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.669447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-26 19:29:57.669661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-26 19:29:57.669704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.669833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.669865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.670038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.670070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 
00:28:34.738 [2024-11-26 19:29:57.670177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.670209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.670338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.670378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.670643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.670687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.670932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.670965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.671070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.671103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.671335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.671366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.671624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.671656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.671878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.671910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.672046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.672078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.672267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.672300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 
00:28:34.738 [2024-11-26 19:29:57.672479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.672511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.672694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.672726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.672866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.672898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.673075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.673108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.673248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.673289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.673543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.673574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.673810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.673844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.674027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.674060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.674181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.674214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.674457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.674489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 
00:28:34.738 [2024-11-26 19:29:57.674701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.674734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.674903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.674936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.675075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.675107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.675292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.675324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.675533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.675566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.675695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.675728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.675859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.675892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.676127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.676160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.676418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.676451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 00:28:34.738 [2024-11-26 19:29:57.676562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.738 [2024-11-26 19:29:57.676594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.738 qpair failed and we were unable to recover it. 
00:28:34.738 [2024-11-26 19:29:57.676802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.676836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.676957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.676989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.677253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.677285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.677467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.677500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.677724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.677759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.677999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.678033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.678165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.678198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.678381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.678414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.678606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.678638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.678766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.739 [2024-11-26 19:29:57.678794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:34.739 [2024-11-26 19:29:57.678789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.678804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.739 [2024-11-26 19:29:57.678811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.739 [2024-11-26 19:29:57.678819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:34.739 [2024-11-26 19:29:57.678822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.679004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.679036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.679160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.679191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.679312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.679342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.679522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.679553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.679794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.679828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.680014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.680046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.680284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.680317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 
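The app_setup_trace notices interleaved above describe two ways to look at the target's trace data while it is still running. Purely as an illustrative aside (not part of this CI run), the same two steps could be scripted as follows; the spdk_trace arguments and the /dev/shm/nvmf_trace.0 path are quoted from the notices themselves, while the destination filename and the assumption that spdk_trace is on PATH on the test host are added for the example.

    import shutil
    import subprocess

    # Take the runtime snapshot the notice suggests ('spdk_trace -s nvmf -i 0').
    subprocess.run(["spdk_trace", "-s", "nvmf", "-i", "0"], check=True)

    # Or keep the raw shared-memory trace file for offline analysis/debug.
    shutil.copy("/dev/shm/nvmf_trace.0", "./nvmf_trace.0")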
00:28:34.739 [2024-11-26 19:29:57.680556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.680487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:34.739 [2024-11-26 19:29:57.680589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.680733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:34.739 [2024-11-26 19:29:57.680595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:34.739 [2024-11-26 19:29:57.680734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:34.739 [2024-11-26 19:29:57.680837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.680871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.681022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.681054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.681176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.681206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.681495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.681533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.681663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.681707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.681912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.681945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.682078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.682109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.682330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.682362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 
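All of the posix_sock_create failures in this stretch report errno = 111, which is ECONNREFUSED on Linux: the host can reach 10.0.0.2, but nothing is accepting connections on TCP port 4420 yet, which would be consistent with the target application still bringing up its reactors in the notices just above. A minimal, hypothetical probe (not part of the SPDK test code) that tells that condition apart from other socket errors, reusing the address and port from the log, might look like this:

    import errno
    import socket

    def nvmf_portal_is_listening(addr="10.0.0.2", port=4420, timeout=1.0):
        # Illustrative helper only; addr and port are the portal values
        # that appear in the connection errors above.
        try:
            with socket.create_connection((addr, port), timeout=timeout):
                return True
        except OSError as exc:
            if exc.errno == errno.ECONNREFUSED:
                # errno 111: the address answers, but nothing is listening
                # on the port yet, the condition posix_sock_create reports.
                return False
            raise

Polling a check like this until it returns True is one way a wrapper script could wait for the listener to come up instead of retrying the connect blindly.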
00:28:34.739 [2024-11-26 19:29:57.682543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.682575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.682860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.682894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-11-26 19:29:57.683029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-11-26 19:29:57.683062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.683195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.683227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.683405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.683437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.683684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.683719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.683935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.683967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.684083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.684114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.684251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.684283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.684463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.684495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 
00:28:34.740 [2024-11-26 19:29:57.684614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.684646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.684831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.684863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.685042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.685074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.685348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.685381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.685603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.685635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.685759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.685792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.685940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.685980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.686172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.686203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.686383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.686415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.686590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.686621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 
00:28:34.740 [2024-11-26 19:29:57.686818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.686853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.687098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.687130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.687520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.687554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.687805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.687841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.688059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.688091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.688354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.688387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.688634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.688668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.688794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.688827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.688957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.688989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.689190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.689223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 
00:28:34.740 [2024-11-26 19:29:57.689516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.689550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.689754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.689789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.690038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.690072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.690197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.690230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.690431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.690464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-11-26 19:29:57.690591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-11-26 19:29:57.690631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.690845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.690880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.691010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.691043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.691243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.691276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.691549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.691583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 
00:28:34.741 [2024-11-26 19:29:57.691762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.691798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.691971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.692004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.692318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.692354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.692483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.692516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.692642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.692683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.692831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.692864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.693059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.693092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.693304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.693337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.693479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.693513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.693699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.693735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 
00:28:34.741 [2024-11-26 19:29:57.693930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.693965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.694079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.694111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.694315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.694348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.694470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.694502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.694621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.694653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.694775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.694808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.694923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.694956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.695090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.695123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.695297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.695330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.695542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.695574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 
00:28:34.741 [2024-11-26 19:29:57.695704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.695738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.695885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.695918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.696119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.696152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.696358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.696392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.696595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.696628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.696812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.696845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.696964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.696995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.697126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.697159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.697381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.697414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.697592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.697625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 
00:28:34.741 [2024-11-26 19:29:57.697824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.697857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.698000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.698031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-11-26 19:29:57.698156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-11-26 19:29:57.698188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.742 [2024-11-26 19:29:57.698313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-11-26 19:29:57.698344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-11-26 19:29:57.698586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-11-26 19:29:57.698618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-11-26 19:29:57.698908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-11-26 19:29:57.698957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-11-26 19:29:57.699168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-11-26 19:29:57.699200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-11-26 19:29:57.699528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-11-26 19:29:57.699561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-11-26 19:29:57.699759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-11-26 19:29:57.699793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-11-26 19:29:57.699983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-11-26 19:29:57.700015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 
00:28:34.742 [2024-11-26 19:29:57.700142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.742 [2024-11-26 19:29:57.700175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420
00:28:34.742 qpair failed and we were unable to recover it.
[... the same three-line pattern — connect() failed, errno = 111; sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 2024-11-26 19:29:57.700142 through 19:29:57.744061 (console timestamps 00:28:34.742-00:28:34.747); the affected qpairs are tqpair=0x7f8314000b90, tqpair=0x7f8320000b90, and tqpair=0x1c49be0 ...]
00:28:34.747 [2024-11-26 19:29:57.744266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-26 19:29:57.744300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-26 19:29:57.744566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-26 19:29:57.744600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-26 19:29:57.744781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-26 19:29:57.744815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-26 19:29:57.744997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-26 19:29:57.745028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-26 19:29:57.745219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-26 19:29:57.745251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-26 19:29:57.745437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-26 19:29:57.745469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-26 19:29:57.745663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-26 19:29:57.745706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-26 19:29:57.745918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-26 19:29:57.745952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-26 19:29:57.746214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-26 19:29:57.746246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:35.022 [2024-11-26 19:29:57.746428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.022 [2024-11-26 19:29:57.746460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.022 qpair failed and we were unable to recover it. 
00:28:35.022 [2024-11-26 19:29:57.746725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.022 [2024-11-26 19:29:57.746759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.022 qpair failed and we were unable to recover it. 00:28:35.022 [2024-11-26 19:29:57.746903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.022 [2024-11-26 19:29:57.746934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.022 qpair failed and we were unable to recover it. 00:28:35.022 [2024-11-26 19:29:57.747071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.022 [2024-11-26 19:29:57.747103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.022 qpair failed and we were unable to recover it. 00:28:35.022 [2024-11-26 19:29:57.747224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.022 [2024-11-26 19:29:57.747257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.022 qpair failed and we were unable to recover it. 00:28:35.022 [2024-11-26 19:29:57.747461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.022 [2024-11-26 19:29:57.747494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.022 qpair failed and we were unable to recover it. 00:28:35.022 [2024-11-26 19:29:57.747695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.022 [2024-11-26 19:29:57.747729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.022 qpair failed and we were unable to recover it. 00:28:35.022 [2024-11-26 19:29:57.747919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.022 [2024-11-26 19:29:57.747950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.022 qpair failed and we were unable to recover it. 00:28:35.022 [2024-11-26 19:29:57.748147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.022 [2024-11-26 19:29:57.748178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.022 qpair failed and we were unable to recover it. 00:28:35.022 [2024-11-26 19:29:57.748403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.022 [2024-11-26 19:29:57.748434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.022 qpair failed and we were unable to recover it. 00:28:35.022 [2024-11-26 19:29:57.748625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.022 [2024-11-26 19:29:57.748657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.022 qpair failed and we were unable to recover it. 
00:28:35.022 [2024-11-26 19:29:57.748870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.022 [2024-11-26 19:29:57.748903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.022 qpair failed and we were unable to recover it. 00:28:35.022 [2024-11-26 19:29:57.749039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.022 [2024-11-26 19:29:57.749071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.022 qpair failed and we were unable to recover it. 00:28:35.022 [2024-11-26 19:29:57.749247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.022 [2024-11-26 19:29:57.749278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.022 qpair failed and we were unable to recover it. 00:28:35.022 [2024-11-26 19:29:57.749451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.022 [2024-11-26 19:29:57.749483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.022 qpair failed and we were unable to recover it. 00:28:35.022 [2024-11-26 19:29:57.749688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.022 [2024-11-26 19:29:57.749721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.022 qpair failed and we were unable to recover it. 00:28:35.022 [2024-11-26 19:29:57.749911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.022 [2024-11-26 19:29:57.749944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.022 qpair failed and we were unable to recover it. 00:28:35.022 [2024-11-26 19:29:57.750136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.022 [2024-11-26 19:29:57.750168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.022 qpair failed and we were unable to recover it. 00:28:35.022 [2024-11-26 19:29:57.750379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.022 [2024-11-26 19:29:57.750412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.022 qpair failed and we were unable to recover it. 00:28:35.022 [2024-11-26 19:29:57.750681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.750715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.750906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.750938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 
00:28:35.023 [2024-11-26 19:29:57.751122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.751154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.751375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.751408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.751584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.751617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.751798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.751831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.751960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.751992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.752160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.752192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.752395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.752428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.752598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.752631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.752795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.752828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.753089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.753126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 
00:28:35.023 [2024-11-26 19:29:57.753327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.753358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.753623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.753656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.753854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.753887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.753993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.754026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.754162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.754195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.754401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.754433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.754680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.754713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.754826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.754858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.754994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.755025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.755155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.755187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 
00:28:35.023 [2024-11-26 19:29:57.755329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.755362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.755597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.755629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.755903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.755936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.756064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.756097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.756264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.756295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.756472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.756504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.756616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.756649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.756813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.756845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.756980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.757013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.757137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.757168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 
00:28:35.023 [2024-11-26 19:29:57.757358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.757391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.757681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.757715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.757839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.757871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.758068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.758100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.758242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.758275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.023 [2024-11-26 19:29:57.758452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.023 [2024-11-26 19:29:57.758483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.023 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.758620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.758653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.758788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.758820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.758953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.758985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.759226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.759256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 
00:28:35.024 [2024-11-26 19:29:57.759436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.759468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.759711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.759745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.759960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.759992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.760197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.760228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.760602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.760634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.760866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.760899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.761029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.761060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.761368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.761400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.761596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.761629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.761789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.761827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 
00:28:35.024 [2024-11-26 19:29:57.761971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.762004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.762148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.762180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.762359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.762391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.762499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.762531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.762734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.762767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.763028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.763060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.763249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.763280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.763538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.763569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.763708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.763742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.763874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.763905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 
00:28:35.024 [2024-11-26 19:29:57.764086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.764119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.764361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.764393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.764578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.764609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.764818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.764853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.765045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.765077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.765210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.765242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.765424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.765456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.765664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.765707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.765913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.765945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.766055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.766088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 
00:28:35.024 [2024-11-26 19:29:57.766214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.766245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.766416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.766448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.766562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.766593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.024 [2024-11-26 19:29:57.766790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.024 [2024-11-26 19:29:57.766824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.024 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.766956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.766987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.767177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.767208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.767488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.767522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.767753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.767787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.767919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.767950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.768158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.768191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 
00:28:35.025 [2024-11-26 19:29:57.768495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.768526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.768701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.768734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.768921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.768953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.769130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.769161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.769392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.769423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.769599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.769630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.769879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.769913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.770098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.770130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.770273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.770305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.770479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.770515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 
00:28:35.025 [2024-11-26 19:29:57.770620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.770652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.770847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.770880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.771116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.771147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.771282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.771314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.771538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.771570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.771692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.771725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.771923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.771954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.772179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.772212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.772407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.772439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.772667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.772719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 
00:28:35.025 [2024-11-26 19:29:57.772864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.772896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.773132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.773164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.773405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.773438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.773637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.773680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.773861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.773892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.774017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.774049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.774254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.774286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.774396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.774426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.774623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.774655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.774798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.774829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 
00:28:35.025 [2024-11-26 19:29:57.774958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.774990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-11-26 19:29:57.775160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-11-26 19:29:57.775192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.775399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.775431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.775556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.775588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.775780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.775814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.776060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.776091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.776342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.776374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.776640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.776681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.776880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.776912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.777104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.777136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 
00:28:35.026 [2024-11-26 19:29:57.777242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.777274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.777513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.777545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.777766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.777799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.777942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.777973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.778160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.778192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.778402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.778434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.778682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.778716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.778835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.778867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.779061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.779093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.779370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.779408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 
00:28:35.026 [2024-11-26 19:29:57.779688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.779722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.779903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.779935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.780107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.780140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.780270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.780302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.780492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.780524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.780809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.780843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.780984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.781017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.781130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.781161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.781353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.781386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.781622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.781655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 
00:28:35.026 [2024-11-26 19:29:57.781788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.781820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.782011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.782043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.782324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.782356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.782547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.782580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.782790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.782824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.783037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.783068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.783183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.783217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-11-26 19:29:57.783486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-11-26 19:29:57.783518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.783756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.783788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.783977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.784009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 
00:28:35.027 [2024-11-26 19:29:57.784221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.784252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.784508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.784540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.784722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.784755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.784883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.784916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.785113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.785144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.785363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.785395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.785585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.785617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.785869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.785902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.786113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.786144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.786355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.786388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 
00:28:35.027 [2024-11-26 19:29:57.786602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.786633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.786859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.786892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.787074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.787106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.787308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.787339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.787532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.787564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.787754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.787789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.788028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.788059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.788251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.788283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.788484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.788517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.788784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.788823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 
00:28:35.027 [2024-11-26 19:29:57.789025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.789057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.789228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.789260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.789480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.789512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.789706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.789739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.789866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.789898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.790104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.790136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.790363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.790394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.790574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.790606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.790714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.790746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.790938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.790969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 
00:28:35.027 [2024-11-26 19:29:57.791177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.791209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.791377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.791409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-11-26 19:29:57.791592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-11-26 19:29:57.791625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.791896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.791930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.792114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.792146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.792275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.792307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.792561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.792607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.792808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.792842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.792966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.792998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.793237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.793268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 
00:28:35.028 [2024-11-26 19:29:57.793478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.793509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.793628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.793661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.793852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.793884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.794003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.794035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.794156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.794188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.794429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.794461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.794691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.794725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.794905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.794938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.795054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.795084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.795213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.795244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 
00:28:35.028 [2024-11-26 19:29:57.795517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.795548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.795737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.795770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.795948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.795980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.796168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.796200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.796326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.796358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.796545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.796577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.796801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.796833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.796954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.796985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.797111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.797143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.797255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.797293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 
00:28:35.028 [2024-11-26 19:29:57.797559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.797592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.797789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.797822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.797956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.797987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.798108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.798140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.798397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.798430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.798571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.798602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.798786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.798818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.798994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.799026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.799207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.799239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-11-26 19:29:57.799372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.799405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 
00:28:35.028 [2024-11-26 19:29:57.799603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-11-26 19:29:57.799635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.799846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.799878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.800016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.800048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.800199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.800232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.800407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.800439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.800681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.800716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.800850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.800882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.801021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.801052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.801224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.801255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.801433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.801465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 
00:28:35.029 [2024-11-26 19:29:57.801587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.801618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.801772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.801805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.801991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.802023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.802163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.802195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.802327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.802360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.802615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.802647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.802825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.802861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.803056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.803088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.803361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.803393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.803571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.803604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 
00:28:35.029 [2024-11-26 19:29:57.803789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.803822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.803951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.803982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.804178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.804210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.804318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.804349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.804555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.804587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.804772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.804806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.804952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.804984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.805177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.805209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.805399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.805431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.805642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.805689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 
00:28:35.029 [2024-11-26 19:29:57.805810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.805842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.806022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.806054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.806193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.806226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.806496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.806527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.806773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.806806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.806922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.806954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.807079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.807111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.807292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.807324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-11-26 19:29:57.807508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-11-26 19:29:57.807539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.807792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.807826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 
00:28:35.030 [2024-11-26 19:29:57.808082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.808119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.808371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.808403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.808584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.808616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.808802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.808835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.808964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.808996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.809129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.809160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.809397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.809429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.809712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.809745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.809928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.809960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.810139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.810171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 
00:28:35.030 [2024-11-26 19:29:57.810436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.810467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.810646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.810686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.810824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.810856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.811063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.811095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.811217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.811249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.811482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.811513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.811744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.811804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.812010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.812046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.812323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.812355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.812544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.812576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 
00:28:35.030 [2024-11-26 19:29:57.812766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.812801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.812940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.812972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.813178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.813211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.813468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.813500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.813689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.813722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.813959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.813991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.814179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.814211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.814407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.814438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.814550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.814582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.814827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.814869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 
00:28:35.030 [2024-11-26 19:29:57.815071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.815104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.815410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.815442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.815620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.815651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.815898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.815931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.816037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.816070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.816205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.816237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-11-26 19:29:57.816362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-11-26 19:29:57.816394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.816570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.816603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.816724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.816757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.816947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.816980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 
00:28:35.031 [2024-11-26 19:29:57.817257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.817290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.817464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.817496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.817685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.817718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.817860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.817892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.818037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.818068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.818276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.818307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.818502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.818535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.818723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.818757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.818945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.818977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.819106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.819139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 
00:28:35.031 [2024-11-26 19:29:57.819269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.819301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.819491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.819523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.819818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.819852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.820044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.820075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.820213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.820245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.820352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.820384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.820503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.820540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.820810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.820843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.820971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.821002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.821182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.821214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 
00:28:35.031 [2024-11-26 19:29:57.821349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.821380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.821624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.821656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.821842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.821874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.821997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.822028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.822128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.822160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.822277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.822309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.822489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.822522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.822733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.822766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.823002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.823035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-11-26 19:29:57.823173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-11-26 19:29:57.823209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 
00:28:35.032 [2024-11-26 19:29:57.823322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.823354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.823567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.823599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.823719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.823753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.823890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.823922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.824162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.824194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.824331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.824363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.824542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.824573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.824693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.824726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.824967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.824999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.825213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.825244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 
00:28:35.032 [2024-11-26 19:29:57.825469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.825502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.825689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.825722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.825942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.825975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.826202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.826234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.826402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.826434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.826636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.826668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.826804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.826840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.826967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.826999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.827185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.827217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.827402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.827433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 
00:28:35.032 [2024-11-26 19:29:57.827682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.827715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.827883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.827915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.828106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.828137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.828382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.828413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.828545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.828577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.828753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.828786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.829078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.829114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.829397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.829429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.829702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.829736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.829924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.829958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 
00:28:35.032 [2024-11-26 19:29:57.830099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.830131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.830264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.830296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.830564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.830597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.830759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.830792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.830988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.831020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.831140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.831172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.831356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.831389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.032 qpair failed and we were unable to recover it. 00:28:35.032 [2024-11-26 19:29:57.831663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.032 [2024-11-26 19:29:57.831705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.831884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.831917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.832090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.832128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 
00:28:35.033 [2024-11-26 19:29:57.832322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.832354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.832589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.832621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.832949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.832982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.833106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.833138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.833360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.833392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.833576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.833608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.833789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.833821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.834003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.834036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.834235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.834266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.834479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.834510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 
00:28:35.033 [2024-11-26 19:29:57.834703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.834736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.834879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.834911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.835107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.835139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.835346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.835379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.835636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.835680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.835873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.835905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.836038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.836069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.836194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.836227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.836414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.836447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.836628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.836660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 
00:28:35.033 [2024-11-26 19:29:57.836802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.836834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.836972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.837004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.837139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.837171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.837439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.837471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.837604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.837635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.837881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.837915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.838037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.838074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.838318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.838349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.838543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.838575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.838758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.838791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 
00:28:35.033 [2024-11-26 19:29:57.838991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.839023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.839216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.839249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.839385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.839417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.839612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.839644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.839786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.839817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.033 qpair failed and we were unable to recover it. 00:28:35.033 [2024-11-26 19:29:57.840007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.033 [2024-11-26 19:29:57.840039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.840226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.840258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.840495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.840526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.840641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.840681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.840872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.840910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 
00:28:35.034 [2024-11-26 19:29:57.841115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.841148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.841332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.841363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.841552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.841584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.841771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.841804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.841925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.841956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.842087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.842119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.842251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.842283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.842528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.842559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.842685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.842718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.842846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.842878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 
00:28:35.034 [2024-11-26 19:29:57.843142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.843173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.843512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.843544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.843658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.843700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.843914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.843945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.844145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.844177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.844490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.844521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.844795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.844828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.844944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.844976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.845168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.845201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.845476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.845509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 
00:28:35.034 [2024-11-26 19:29:57.845635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.845666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.845872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.845905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.846099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.846131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.846342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.846375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.846558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.846591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.846829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.846861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.846974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.847009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.847186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.847217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.847466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.847499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.847764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.847798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 
00:28:35.034 [2024-11-26 19:29:57.847930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.847961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.848225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.848257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.848536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.034 [2024-11-26 19:29:57.848569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.034 qpair failed and we were unable to recover it. 00:28:35.034 [2024-11-26 19:29:57.848689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.848722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.848958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.848990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.849174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.849206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.849496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.849528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.849646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.849702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.849888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.849921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.850098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.850135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 
00:28:35.035 [2024-11-26 19:29:57.850273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.850305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.850568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.850600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.850793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.850827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.850965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.850998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.851259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.851291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.851578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.851609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.851917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.851950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.852094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.852126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.852394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.852426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.852606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.852637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 
00:28:35.035 [2024-11-26 19:29:57.852843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.852876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.853005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.853036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.853228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.853259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.853489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.853520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.853713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.853753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.853943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.853975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.854117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.854149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.854387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.854419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.854541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.854572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.854749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.854781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 
00:28:35.035 [2024-11-26 19:29:57.854921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.854953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.855145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.855176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.855291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.855322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.855561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.855592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.855766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.855798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.855983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.856015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.856220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.856256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.856465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.856496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.856692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.856726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.856841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.856871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 
00:28:35.035 [2024-11-26 19:29:57.857011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.035 [2024-11-26 19:29:57.857043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.035 qpair failed and we were unable to recover it. 00:28:35.035 [2024-11-26 19:29:57.857172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.857203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.857393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.857426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.857630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.857661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.857871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.857903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.858023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.858055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.858254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.858285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.858406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.858437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.858555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.858586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.858787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.858819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 
00:28:35.036 [2024-11-26 19:29:57.858961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.858994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.859185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.859218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.859445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.859477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.859667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.859710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.859902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.859934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.860198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.860228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.860434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.860465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.860717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.860750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.860876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.860908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.861109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.861142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 
00:28:35.036 [2024-11-26 19:29:57.861338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.861370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.861581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.861612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.861758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.861790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.862047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.862078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.862266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.862297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.862481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.862512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.862755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.862787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.862998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.863029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.863167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.863199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.863425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.863456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 
00:28:35.036 [2024-11-26 19:29:57.863645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.863687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.863874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.863906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.864030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.864061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.864199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.864231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.864518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.864550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.036 [2024-11-26 19:29:57.864791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.036 [2024-11-26 19:29:57.864824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.036 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.865028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.865066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.865285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.865318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.865506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.865537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.865717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.865750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 
00:28:35.037 [2024-11-26 19:29:57.865875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.865907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.866039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.866071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.866327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.866359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.866564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.866597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.866753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.866787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.867045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.867077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.867398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.867430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.867605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.867637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.867858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.867891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.868071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.868103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 
00:28:35.037 [2024-11-26 19:29:57.868402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.868435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.868569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.868601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.868850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.868883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.869112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.869143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.869402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.869434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.869615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.869647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.869880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.869912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.870093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.870124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.870475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.870507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.870686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.870719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 
00:28:35.037 [2024-11-26 19:29:57.870900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.870932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.871046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.871078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.871256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.871288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.871496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.871528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.871710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.871744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.871868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.871899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.872079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.872111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.872412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.872445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.872686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.872719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.872919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.872951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 
00:28:35.037 [2024-11-26 19:29:57.873139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.873171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.873299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.873330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.873523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.873554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.037 [2024-11-26 19:29:57.873687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.037 [2024-11-26 19:29:57.873719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.037 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.873945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.873977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.874177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.874209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.874331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.874368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.874580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.874612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.874917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.874949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.875135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.875167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 
00:28:35.038 [2024-11-26 19:29:57.875349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.875381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.875652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.875706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.875876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.875909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.876043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.876075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.876332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.876364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.876599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.876631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.876773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.876807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.877071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.877102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.877238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.877269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.877473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.877505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 
00:28:35.038 [2024-11-26 19:29:57.877636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.877667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.877826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.877860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.878039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.878072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.878350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.878382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.878512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.878545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.878816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.878850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.879098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.879131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.879437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.879469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.879727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.879760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.879895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.879927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 
00:28:35.038 [2024-11-26 19:29:57.880063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.880095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.880214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.880247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.880420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.880452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.880655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.880707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.880833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.880865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.880987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.881018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.881210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.881241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.881455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.881487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.881685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.881719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.881991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.882023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 
00:28:35.038 [2024-11-26 19:29:57.882193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.038 [2024-11-26 19:29:57.882225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.038 qpair failed and we were unable to recover it. 00:28:35.038 [2024-11-26 19:29:57.882356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.882388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.882591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.882623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.882769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.882802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.883061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.883092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.883202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.883233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.883432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.883469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.883644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.883686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.883879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.883911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.884024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.884056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 
00:28:35.039 [2024-11-26 19:29:57.884196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.884227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.884489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.884521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.884713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.884746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.885005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.885036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.885220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.885252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.885546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.885578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.885760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.885793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.885944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.885974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.886080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.886113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.886327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.886358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 
00:28:35.039 [2024-11-26 19:29:57.886561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.886593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.886786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.886819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.887004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.887035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.887153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.887185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.887318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.887351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.887587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.887617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.887741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.887773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.887881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.887913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.888177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.888208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.888337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.888369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 
00:28:35.039 [2024-11-26 19:29:57.888553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.888585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.888828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.888862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.888991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.889022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.889207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.889239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.889365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.889397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.889657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.889696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.889885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.889916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.890039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.890070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.890239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.890270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.039 qpair failed and we were unable to recover it. 00:28:35.039 [2024-11-26 19:29:57.890403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.039 [2024-11-26 19:29:57.890434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 
00:28:35.040 [2024-11-26 19:29:57.890606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.890637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.890859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.890892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.891080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.891110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.891245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.891277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.891448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.891481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.891602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.891632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.891756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.891794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.891921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.891952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.892076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.892108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.892239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.892271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 
00:28:35.040 [2024-11-26 19:29:57.892506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.892537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.892719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.892752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.892877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.892909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.893028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.893060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.893232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.893264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.893379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.893410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.893582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.893613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.893795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.893829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.894009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.894041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.894150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.894181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 
00:28:35.040 [2024-11-26 19:29:57.894299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.894331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.894512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.894543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.894733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.894767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.894881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.894913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.895099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.895132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.895235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.895266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.895395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.895427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.895541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.895573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.895705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.895738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.895838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.895870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 
00:28:35.040 [2024-11-26 19:29:57.896038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.896069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.896249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.896281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.896461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.896494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.040 [2024-11-26 19:29:57.896611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.040 [2024-11-26 19:29:57.896644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.040 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.896771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.896802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.896920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.896952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.897137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.897168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.897289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.897321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.897522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.897554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.897655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.897696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 
00:28:35.041 [2024-11-26 19:29:57.897832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.897864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.898103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.898135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.898315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.898346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.898525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.898557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.898681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.898714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.898820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.898853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.899089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.899127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.899302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.899333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.899446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.899478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.899597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.899628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 
00:28:35.041 [2024-11-26 19:29:57.899815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.899845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.899973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.900005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.900196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.900229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.900399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.900430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.900549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.900581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.900765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.900800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.900914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.900945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.901053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.901084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.901252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.901284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.901532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.901563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 
00:28:35.041 [2024-11-26 19:29:57.901687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.901721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.901848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.901880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.902014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.902046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.902231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.902263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.902451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.902484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.902699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.902731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.902865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.902897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.903002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.903033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.903218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.903250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.903490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.903522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 
00:28:35.041 [2024-11-26 19:29:57.903707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.903739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.041 [2024-11-26 19:29:57.904027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.041 [2024-11-26 19:29:57.904059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.041 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.904401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.904433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.904616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.904648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.904839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.904871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.905134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.905167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.905359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.905391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.905590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.905622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.905764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.905797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.906042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.906073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 
00:28:35.042 [2024-11-26 19:29:57.906288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.906320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.906435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.906467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.906716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.906762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.906947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.906978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.907168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.907200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.907382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.907414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.907709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.907748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.907930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.907961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.908087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.908120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.908305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.908336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 
00:28:35.042 [2024-11-26 19:29:57.908452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.908484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.908748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.908782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.908907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.908939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.909121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.909153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.909375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.909407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.909518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.909550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.909814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.909846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.909996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.910028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.910209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.910241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.910408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.910440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 
00:28:35.042 [2024-11-26 19:29:57.910630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.910662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.910847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.910878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.911067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.911100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.911213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.911245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.911421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.911452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.911567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.911598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.911728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.911761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.911961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.911993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.912118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.912149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 00:28:35.042 [2024-11-26 19:29:57.912382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.042 [2024-11-26 19:29:57.912415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.042 qpair failed and we were unable to recover it. 
00:28:35.043 [2024-11-26 19:29:57.912602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.912634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.912787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.912820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.913012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.913043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.913255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.913287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.913473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.913505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.913690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.913723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.913916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.913948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.914143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.914174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.914442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.914473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.914645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.914686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 
00:28:35.043 [2024-11-26 19:29:57.914877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.914909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.915092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.915123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.915303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.915334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.915523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.915554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.915691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.915737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.915867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.915900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.916078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.916114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.916329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.916360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.916532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.916565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.916736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.916769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 
00:28:35.043 [2024-11-26 19:29:57.917048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.917080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.917252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.917284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.917472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.917504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.917706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.917738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.917870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.917902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.918093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.918125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.918385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.918416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.918704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.918737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.919021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.919052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.919235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.919266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 
00:28:35.043 [2024-11-26 19:29:57.919535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.919567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.919788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.919822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.919956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.919988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.920216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.920247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.920429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.920460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.920637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.920678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.920848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.920880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.921065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.043 [2024-11-26 19:29:57.921097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.043 qpair failed and we were unable to recover it. 00:28:35.043 [2024-11-26 19:29:57.921207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.921238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.921511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.921543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 
00:28:35.044 [2024-11-26 19:29:57.921753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.921786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.922017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.922049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.922234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.922265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.922551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.922598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.922861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.922896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.923070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.923101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.923280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.923311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.923500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.923531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.923648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.923689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.923892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.923922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 
00:28:35.044 [2024-11-26 19:29:57.924112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.924143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.924413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.924444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.924626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.924657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.924882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.924914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.925045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.925075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.925360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.925391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.925528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.925560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.925805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.925838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.926018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.926048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.926238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.926269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 
00:28:35.044 [2024-11-26 19:29:57.926388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.926417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.926680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.926714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.926901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.926932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.927167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.927198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.927376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.927407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.927644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.927683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.927864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.927894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.928075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.928105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.928291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.928322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.928491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.928522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 A controller has encountered a failure and is being reset. 
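Every entry in the block above is the same retry pattern: posix_sock_create() gets errno 111 back from connect(), nvme_tcp_qpair_connect_sock() then gives up on that qpair ("qpair failed and we were unable to recover it"), and the single "A controller has encountered a failure and is being reset." line shows the controller-level reset path being taken while the reconnect attempts keep failing. On Linux, errno 111 is ECONNREFUSED: the host at 10.0.0.2 is reachable, but nothing is accepting connections on port 4420 (the NVMe/TCP listen port used by this test). The snippet below is a minimal editorial sketch of that condition only, assuming a Linux host; it is not part of the SPDK test suite, and the address/port are simply the values taken from the log.

/*
 * Editorial sketch: reproduce the errno 111 seen above with a plain TCP
 * connect() to an address/port where nothing is listening. On Linux,
 * errno 111 is ECONNREFUSED, which is what posix_sock_create() keeps
 * reporting while the NVMe/TCP target on 10.0.0.2:4420 is down.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Values taken from the log above; substitute any host that is
     * reachable but has no listener on the chosen port. */
    const char *addr = "10.0.0.2";
    uint16_t port = 4420;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = { 0 };
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    if (inet_pton(AF_INET, addr, &sa.sin_addr) != 1) {
        fprintf(stderr, "bad address: %s\n", addr);
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* Expected while the target is down:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected - a listener is present on %s:%u\n", addr, port);
    }

    close(fd);
    return 0;
}

Note that ECONNREFUSED specifically means the peer actively rejected the connection; if the host itself were down or unroutable, connect() would instead fail with a different errno (for example ETIMEDOUT or EHOSTUNREACH), so the repeated 111 in this log points at a missing or stopped listener rather than a broken network path.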
00:28:35.044 [2024-11-26 19:29:57.928769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.928804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.928996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.929027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.929212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.929242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.929359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.929390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.929644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.929684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.929891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.044 [2024-11-26 19:29:57.929923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.044 qpair failed and we were unable to recover it. 00:28:35.044 [2024-11-26 19:29:57.930161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.930192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.930410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.930441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.930707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.930740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.930922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.930953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 
00:28:35.045 [2024-11-26 19:29:57.931187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.931218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.931404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.931435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.931690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.931722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.931956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.931993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.932176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.932207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.932495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.932526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.932763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.932795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.933055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.933087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.933275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.933306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.933542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.933573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 
00:28:35.045 [2024-11-26 19:29:57.933769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.933801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.933971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.934003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.934180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.934211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.934393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.934424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.934699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.934733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.934900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.934931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.935119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.935150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.935324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.935356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.935552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.935583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.935817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.935851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 
00:28:35.045 [2024-11-26 19:29:57.936046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.936077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.936309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.936340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.936443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.936475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.936735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.936768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.936959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.936990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.937248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.937279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.937467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.937498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.937608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.937639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.937924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.937990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 00:28:35.045 [2024-11-26 19:29:57.938243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.938278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.045 qpair failed and we were unable to recover it. 
00:28:35.045 [2024-11-26 19:29:57.938567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.045 [2024-11-26 19:29:57.938599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.938788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.938821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.939004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.939035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.939271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.939302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.939472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.939503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.939693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.939726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.939963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.939995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.940229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.940259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.940502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.940533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.940793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.940826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 
00:28:35.046 [2024-11-26 19:29:57.940999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.941029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.941200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.941231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.941503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.941533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.941792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.941825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.942011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.942043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.942322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.942353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.942521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.942552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.942790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.942822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.943061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.943091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.943281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.943313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 
00:28:35.046 [2024-11-26 19:29:57.943494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.943524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.943729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.943761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.944005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.944036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.944215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.944246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.944506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.944536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.944792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.944824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.944944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.944975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.945179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.945217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.945487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.945520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.945776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.945809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 
00:28:35.046 [2024-11-26 19:29:57.946067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.946098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.946265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.946296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.946584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.946616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.946876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.946909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.947025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.947056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.947224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.947256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.947437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.947468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.947702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.947735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.948022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.046 [2024-11-26 19:29:57.948054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.046 qpair failed and we were unable to recover it. 00:28:35.046 [2024-11-26 19:29:57.948236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.948267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 
00:28:35.047 [2024-11-26 19:29:57.948571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.948613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.948886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.948919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.949180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.949211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.949393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.949424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.949612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.949643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.949817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.949886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.950109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.950143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.950341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.950371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.950621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.950652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.950935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.950967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 
00:28:35.047 [2024-11-26 19:29:57.951243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.951274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.951558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.951589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.951715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.951749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.952010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.952042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.952257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.952288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.952523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.952554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.952813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.952845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.952980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.953011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.953272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.953303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.953588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.953619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 
00:28:35.047 [2024-11-26 19:29:57.953756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.953788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.953992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.954024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.954257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.954288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.954465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.954497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.954758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.954792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.955031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.955062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.955239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.955270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8318000b90 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.955457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.955492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.955686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.955720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.955910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.955940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 
00:28:35.047 [2024-11-26 19:29:57.956143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.956173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.956435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.956466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.956657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.956694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.956886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.956917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.957151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.957182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.957370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.957400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.957582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.047 [2024-11-26 19:29:57.957613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.047 qpair failed and we were unable to recover it. 00:28:35.047 [2024-11-26 19:29:57.957841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.957874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.958054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.958085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.958271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.958302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 
00:28:35.048 [2024-11-26 19:29:57.958432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.958462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.958637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.958680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.958937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.958969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.959255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.959286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.959561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.959592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.959807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.959839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.959963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.959993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.960185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.960216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.960484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.960514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.960721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.960752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 
00:28:35.048 [2024-11-26 19:29:57.961001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.961031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.961209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.961239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.961454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.961484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.961662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.961702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.961988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.962024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.962281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.962312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.962607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.962638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.962894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.962929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.963128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.963160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.963335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.963367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 
00:28:35.048 [2024-11-26 19:29:57.963554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.963586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.963821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.963855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.964141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.964173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.964362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.964393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.964631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.964662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.964879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.964911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.965176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.965207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.965454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.965485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.965686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.965720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.965889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.965921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 
00:28:35.048 [2024-11-26 19:29:57.966101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.966132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.966392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.966423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.966543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.966575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.966836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.966868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.967039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.967070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.048 [2024-11-26 19:29:57.967262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.048 [2024-11-26 19:29:57.967294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.048 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.967416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.967448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.967707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.967740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.967921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.967953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.968187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.968219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 
00:28:35.049 [2024-11-26 19:29:57.968479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.968510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.968754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.968787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.968991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.969022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.969283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.969315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.969564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.969595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.969804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.969837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.970110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.970141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.970364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.970396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.970510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.970542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.970740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.970773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 
00:28:35.049 [2024-11-26 19:29:57.970952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.970983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.971217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.971249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.971425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.971457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.971635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.971667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.971891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.971929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.972213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.972243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.972459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.972490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.972726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.972760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.972996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.973027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.973267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.973299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 
00:28:35.049 [2024-11-26 19:29:57.973533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.973564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.973810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.973843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.974057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.974088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.974265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.974295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.974560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.974591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.974876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.974909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.975119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.975150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.975383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.975414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.049 [2024-11-26 19:29:57.975655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.049 [2024-11-26 19:29:57.975695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.049 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.975894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.975925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 
00:28:35.050 [2024-11-26 19:29:57.976160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.976192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.976366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.976398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.976528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.976559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.976780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.976813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.977051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.977083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.977203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.977235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.977435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.977467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.977728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.977761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.977932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.977963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.978226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.978257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 
00:28:35.050 [2024-11-26 19:29:57.978535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.978567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8320000b90 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.978784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.978822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.979062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.979094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.979282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.979313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.979552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.979583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.979766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.979798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.980081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.980111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.980296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.980327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.980460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.980491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.980660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.980699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 
00:28:35.050 [2024-11-26 19:29:57.980884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.980916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.981102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.981133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.981306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.981337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.981517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.981548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.981752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.981784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.982061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.982092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.982347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.982378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.982519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.982550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.982811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.982844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.983033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.983063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 
00:28:35.050 [2024-11-26 19:29:57.983297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.983329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.983613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.983643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c49be0 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.983942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.983986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.984268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.984300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.984572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.984604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.984877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.984912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.985194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.050 [2024-11-26 19:29:57.985225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.050 qpair failed and we were unable to recover it. 00:28:35.050 [2024-11-26 19:29:57.985506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.051 [2024-11-26 19:29:57.985537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8314000b90 with addr=10.0.0.2, port=4420 00:28:35.051 qpair failed and we were unable to recover it. 
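The errno = 111 in the retry records above is ECONNREFUSED on Linux: nothing is accepting TCP connections on 10.0.0.2:4420 while the target side of the disconnect test is down, so each qpair reconnect attempt fails immediately. A quick way to check that same condition from the initiator host is sketched below; the address and port come from the log, the probe itself is illustrative and not part of the test scripts.

  # Hypothetical probe (not part of the test scripts): is anything accepting TCP
  # connections on the address/port the qpairs keep retrying?
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "listener is up on 10.0.0.2:4420"
  else
      echo "connection refused or timed out - the condition behind errno 111"
  fi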
00:28:35.051 [2024-11-26 19:29:57.985809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.051 [2024-11-26 19:29:57.985858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c57b20 with addr=10.0.0.2, port=4420 00:28:35.051 [2024-11-26 19:29:57.985882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c57b20 is same with the state(6) to be set 00:28:35.051 [2024-11-26 19:29:57.985915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c57b20 (9): Bad file descriptor 00:28:35.051 [2024-11-26 19:29:57.985942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:35.051 [2024-11-26 19:29:57.985962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:35.051 [2024-11-26 19:29:57.985991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:35.051 Unable to reset the controller. 00:28:35.309 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.309 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:35.309 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:35.309 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:35.309 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.567 Malloc0 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.567 [2024-11-26 19:29:58.462812] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:35.567 
19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.567 [2024-11-26 19:29:58.487777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.567 19:29:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3905809 00:28:36.133 Controller properly reset. 00:28:41.394 Initializing NVMe Controllers 00:28:41.394 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:41.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:41.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:41.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:41.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:41.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:41.394 Initialization complete. Launching workers. 
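The rpc_cmd calls traced above rebuild the target before the final reset check: a Malloc0 bdev (size 64, block size 512, per the arguments), a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 attached as a namespace, and data plus discovery listeners on 10.0.0.2:4420. Outside the test harness the same sequence could be driven with SPDK's RPC client; the sketch below is an assumed manual equivalent that presumes a running nvmf_tgt on the default RPC socket and simply mirrors the commands shown in the log.

  # Assumed manual equivalent of the rpc_cmd sequence above (default /var/tmp/spdk.sock):
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420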
00:28:41.394 Starting thread on core 1 00:28:41.394 Starting thread on core 2 00:28:41.394 Starting thread on core 3 00:28:41.394 Starting thread on core 0 00:28:41.394 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:41.394 00:28:41.394 real 0m11.335s 00:28:41.394 user 0m36.984s 00:28:41.394 sys 0m6.229s 00:28:41.394 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:41.394 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:41.394 ************************************ 00:28:41.394 END TEST nvmf_target_disconnect_tc2 00:28:41.394 ************************************ 00:28:41.394 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:41.394 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:41.394 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:41.394 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:41.394 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:41.394 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:41.394 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:41.394 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.394 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:41.394 rmmod nvme_tcp 00:28:41.394 rmmod nvme_fabrics 00:28:41.394 rmmod nvme_keyring 00:28:41.394 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:41.394 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:41.394 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:41.394 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3906341 ']' 00:28:41.395 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3906341 00:28:41.395 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3906341 ']' 00:28:41.395 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3906341 00:28:41.395 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:28:41.395 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:41.395 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3906341 00:28:41.395 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:28:41.395 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:28:41.395 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3906341' 00:28:41.395 killing process with pid 3906341 00:28:41.395 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 3906341 00:28:41.395 19:30:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3906341 00:28:41.395 19:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:41.395 19:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:41.395 19:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:41.395 19:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:41.395 19:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:41.395 19:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:41.395 19:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:41.395 19:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:41.395 19:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:41.395 19:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.395 19:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.395 19:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.303 19:30:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:43.303 00:28:43.303 real 0m20.141s 00:28:43.303 user 1m4.189s 00:28:43.303 sys 0m11.378s 00:28:43.303 19:30:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:43.303 19:30:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:43.303 ************************************ 00:28:43.303 END TEST nvmf_target_disconnect 00:28:43.303 ************************************ 00:28:43.303 19:30:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:43.303 00:28:43.303 real 5m55.465s 00:28:43.303 user 10m53.836s 00:28:43.303 sys 2m0.025s 00:28:43.303 19:30:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:43.303 19:30:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.303 ************************************ 00:28:43.303 END TEST nvmf_host 00:28:43.303 ************************************ 00:28:43.303 19:30:06 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:43.303 19:30:06 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:43.303 19:30:06 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:43.303 19:30:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:43.303 19:30:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:43.304 19:30:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:43.304 ************************************ 00:28:43.304 START TEST nvmf_target_core_interrupt_mode 00:28:43.304 ************************************ 00:28:43.304 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:43.304 * Looking for test storage... 00:28:43.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:43.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.564 --rc genhtml_branch_coverage=1 00:28:43.564 --rc genhtml_function_coverage=1 00:28:43.564 --rc genhtml_legend=1 00:28:43.564 --rc geninfo_all_blocks=1 00:28:43.564 --rc geninfo_unexecuted_blocks=1 00:28:43.564 00:28:43.564 ' 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:43.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.564 --rc genhtml_branch_coverage=1 00:28:43.564 --rc genhtml_function_coverage=1 00:28:43.564 --rc genhtml_legend=1 00:28:43.564 --rc geninfo_all_blocks=1 00:28:43.564 --rc geninfo_unexecuted_blocks=1 00:28:43.564 00:28:43.564 ' 00:28:43.564 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:43.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.565 --rc genhtml_branch_coverage=1 00:28:43.565 --rc genhtml_function_coverage=1 00:28:43.565 --rc genhtml_legend=1 00:28:43.565 --rc geninfo_all_blocks=1 00:28:43.565 --rc geninfo_unexecuted_blocks=1 00:28:43.565 00:28:43.565 ' 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:43.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.565 --rc genhtml_branch_coverage=1 00:28:43.565 --rc genhtml_function_coverage=1 00:28:43.565 --rc genhtml_legend=1 00:28:43.565 --rc geninfo_all_blocks=1 00:28:43.565 --rc geninfo_unexecuted_blocks=1 00:28:43.565 00:28:43.565 ' 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:43.565 ************************************ 00:28:43.565 START TEST nvmf_abort 00:28:43.565 ************************************ 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:43.565 * Looking for test storage... 00:28:43.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:28:43.565 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:43.826 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:43.826 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:43.826 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:43.826 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:43.826 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:43.826 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:43.826 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:43.826 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:43.826 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:43.826 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:43.826 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:43.826 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:43.826 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:28:43.826 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:43.826 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:43.826 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:43.826 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:43.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.827 --rc genhtml_branch_coverage=1 00:28:43.827 --rc genhtml_function_coverage=1 00:28:43.827 --rc genhtml_legend=1 00:28:43.827 --rc geninfo_all_blocks=1 00:28:43.827 --rc geninfo_unexecuted_blocks=1 00:28:43.827 00:28:43.827 ' 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:43.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.827 --rc genhtml_branch_coverage=1 00:28:43.827 --rc genhtml_function_coverage=1 00:28:43.827 --rc genhtml_legend=1 00:28:43.827 --rc geninfo_all_blocks=1 00:28:43.827 --rc geninfo_unexecuted_blocks=1 00:28:43.827 00:28:43.827 ' 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:43.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.827 --rc genhtml_branch_coverage=1 00:28:43.827 --rc genhtml_function_coverage=1 00:28:43.827 --rc genhtml_legend=1 00:28:43.827 --rc geninfo_all_blocks=1 00:28:43.827 --rc geninfo_unexecuted_blocks=1 00:28:43.827 00:28:43.827 ' 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:43.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.827 --rc genhtml_branch_coverage=1 00:28:43.827 --rc genhtml_function_coverage=1 00:28:43.827 --rc genhtml_legend=1 00:28:43.827 --rc geninfo_all_blocks=1 00:28:43.827 --rc geninfo_unexecuted_blocks=1 00:28:43.827 00:28:43.827 ' 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.827 19:30:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:43.827 19:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:50.399 19:30:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:50.399 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
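The block above is gather_supported_nvmf_pci_devs classifying NICs by PCI vendor:device ID: the e810 list holds Intel E810 IDs (0x8086:0x1592, 0x8086:0x159b), x722 holds 0x8086:0x37d2, and mlx holds a set of Mellanox ConnectX IDs. The two ports on this host (0000:86:00.0 and 0000:86:00.1, 0x8086:0x159b, bound to the ice driver) match the e810 list and become the test's net devices. A minimal sketch of the same classification using lspci, assuming lspci is installed; this is illustrative only and not the actual nvmf/common.sh helper:

    # List PCI addresses of NICs from the Intel E810 family (IDs as traced above).
    e810_ids=("8086:1592" "8086:159b")
    for id in "${e810_ids[@]}"; do
      lspci -D -d "$id" | awk '{print $1}'   # -D: full PCI address, -d: vendor:device filter
    done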
00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:50.399 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:50.399 Found net devices under 0000:86:00.0: cvl_0_0 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.399 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:50.400 Found net devices under 0000:86:00.1: cvl_0_1 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:50.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:50.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:28:50.400 00:28:50.400 --- 10.0.0.2 ping statistics --- 00:28:50.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.400 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:50.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:28:50.400 00:28:50.400 --- 10.0.0.1 ping statistics --- 00:28:50.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.400 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3911595 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3911595 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3911595 ']' 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.400 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.400 [2024-11-26 19:30:12.745553] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:50.400 [2024-11-26 19:30:12.746453] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:28:50.400 [2024-11-26 19:30:12.746488] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.400 [2024-11-26 19:30:12.825223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:50.400 [2024-11-26 19:30:12.866153] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.400 [2024-11-26 19:30:12.866190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.400 [2024-11-26 19:30:12.866197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.400 [2024-11-26 19:30:12.866203] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.400 [2024-11-26 19:30:12.866208] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:50.400 [2024-11-26 19:30:12.867518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.400 [2024-11-26 19:30:12.867625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.400 [2024-11-26 19:30:12.867626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.400 [2024-11-26 19:30:12.935240] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:50.400 [2024-11-26 19:30:12.936211] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:50.400 [2024-11-26 19:30:12.936508] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:50.401 [2024-11-26 19:30:12.936665] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:50.401 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.401 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:50.401 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:50.401 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.401 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.401 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.401 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:50.401 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.401 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.401 [2024-11-26 19:30:13.000347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.401 Malloc0 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.401 Delay0 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
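At this point the target is up: nvmf_tgt was launched inside the cvl_0_0_ns_spdk namespace with -m 0xE and --interrupt-mode; 0xE is binary 1110, so reactors run on cores 1, 2 and 3 (matching the three "Reactor started on core" notices) and each poll-group thread is switched to interrupt mode. abort.sh then configures it over /var/tmp/spdk.sock: a TCP transport, a 64 MiB Malloc bdev with 4096-byte blocks, a delay bdev Delay0 layered on top of it, subsystem nqn.2016-06.io.spdk:cnode0 with Delay0 attached as a namespace, and (just below) TCP listeners on 10.0.0.2:4420. The test drives these through its rpc_cmd wrapper; the same sequence expressed directly with scripts/rpc.py would look roughly like this (a sketch, with the socket path taken from this run's waitforlisten):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    # Same RPCs as traced above, in order.
    $rpc -s $sock nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc -s $sock bdev_malloc_create 64 4096 -b Malloc0
    $rpc -s $sock bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420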
00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.401 [2024-11-26 19:30:13.084288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.401 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:50.401 [2024-11-26 19:30:13.175224] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:52.298 Initializing NVMe Controllers 00:28:52.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:52.298 controller IO queue size 128 less than required 00:28:52.298 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:52.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:52.298 Initialization complete. Launching workers. 
00:28:52.298 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37554 00:28:52.298 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37615, failed to submit 66 00:28:52.298 success 37554, unsuccessful 61, failed 0 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:52.298 rmmod nvme_tcp 00:28:52.298 rmmod nvme_fabrics 00:28:52.298 rmmod nvme_keyring 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3911595 ']' 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3911595 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3911595 ']' 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3911595 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3911595 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3911595' 00:28:52.298 killing process with pid 3911595 
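The abort counters reported above are self-consistent: 37615 aborts were submitted to cnode0, of which 37554 succeeded and 61 did not (37554 + 61 = 37615), another 66 could not be submitted at all, and the 37554 I/Os reported as failed line up with the 37554 successful aborts, while 127 I/Os completed normally. A trivial shell check of that bookkeeping:

    # success + unsuccessful should equal the number of aborts submitted.
    echo $(( 37554 + 61 ))   # prints 37615, matching 'abort submitted 37615'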
00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3911595 00:28:52.298 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3911595 00:28:52.557 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:52.557 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:52.557 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:52.557 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:52.557 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:52.557 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:52.557 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:52.557 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:52.557 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:52.557 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.557 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.557 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:55.095 00:28:55.095 real 0m11.055s 00:28:55.095 user 0m10.070s 00:28:55.095 sys 0m5.760s 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:55.095 ************************************ 00:28:55.095 END TEST nvmf_abort 00:28:55.095 ************************************ 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:55.095 ************************************ 00:28:55.095 START TEST nvmf_ns_hotplug_stress 00:28:55.095 ************************************ 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:55.095 * Looking for test storage... 
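nvmftestfini above undoes the earlier network setup: the port-4420 ACCEPT rule that ipts installed with an 'SPDK_NVMF' comment is stripped by re-loading the ruleset without it, the cvl_0_0_ns_spdk namespace is torn down by _remove_spdk_ns (its output is suppressed in the trace), and the leftover address is flushed from cvl_0_1 before the next test starts. The rule-removal step is just the traced pipeline, which needs root:

    # Drop every iptables rule carrying the SPDK_NVMF comment tag (as iptr does above).
    iptables-save | grep -v SPDK_NVMF | iptables-restore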
00:28:55.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:55.095 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:55.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.096 --rc genhtml_branch_coverage=1 00:28:55.096 --rc genhtml_function_coverage=1 00:28:55.096 --rc genhtml_legend=1 00:28:55.096 --rc geninfo_all_blocks=1 00:28:55.096 --rc geninfo_unexecuted_blocks=1 00:28:55.096 00:28:55.096 ' 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:55.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.096 --rc genhtml_branch_coverage=1 00:28:55.096 --rc genhtml_function_coverage=1 00:28:55.096 --rc genhtml_legend=1 00:28:55.096 --rc geninfo_all_blocks=1 00:28:55.096 --rc geninfo_unexecuted_blocks=1 00:28:55.096 00:28:55.096 ' 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:55.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.096 --rc genhtml_branch_coverage=1 00:28:55.096 --rc genhtml_function_coverage=1 00:28:55.096 --rc genhtml_legend=1 00:28:55.096 --rc geninfo_all_blocks=1 00:28:55.096 --rc geninfo_unexecuted_blocks=1 00:28:55.096 00:28:55.096 ' 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:55.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.096 --rc genhtml_branch_coverage=1 00:28:55.096 --rc genhtml_function_coverage=1 
00:28:55.096 --rc genhtml_legend=1 00:28:55.096 --rc geninfo_all_blocks=1 00:28:55.096 --rc geninfo_unexecuted_blocks=1 00:28:55.096 00:28:55.096 ' 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
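The 'lt 1.15 2' trace repeated above at the start of each test is cmp_versions from scripts/common.sh deciding which lcov options to export: both version strings are split on '.', '-' and ':', each field is validated as numeric by decimal, and the fields are compared numerically left to right; since 1 < 2 the comparison returns 0 and the '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' spelling is exported. A condensed sketch of that comparison using a hypothetical version_lt helper (illustrative, not the exact scripts/common.sh implementation; it skips the numeric validation):

    version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
      done
      return 1   # equal versions are not less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov detected"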
00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:55.096 19:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:00.391 19:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:00.391 19:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:00.391 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:00.391 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.391 
19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:00.391 Found net devices under 0000:86:00.0: cvl_0_0 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.391 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:00.392 Found net devices under 0000:86:00.1: cvl_0_1 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:00.392 19:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:00.392 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:00.651 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:00.651 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:00.651 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:00.651 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:00.651 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:00.651 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:00.651 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:00.651 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:00.651 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:00.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:00.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:29:00.651 00:29:00.651 --- 10.0.0.2 ping statistics --- 00:29:00.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.651 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:29:00.651 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:00.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:00.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:29:00.651 00:29:00.651 --- 10.0.0.1 ping statistics --- 00:29:00.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.651 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:29:00.651 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.651 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:29:00.651 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:00.651 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.651 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:00.651 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:00.651 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.651 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:00.651 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:00.910 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:00.910 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:00.911 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:00.911 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:00.911 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3915499 00:29:00.911 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3915499 00:29:00.911 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:00.911 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3915499 ']' 00:29:00.911 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.911 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.911 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
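At this point nvmftestinit has the test topology in place: the target-side port (cvl_0_0) lives in its own network namespace with 10.0.0.2/24, the initiator side (cvl_0_1) stays in the default namespace with 10.0.0.1/24, and both directions ping cleanly. A condensed sketch of that setup, reconstructed from the ip/iptables/ping calls traced above (interface names, addresses and the namespace name are the values this host detected; nothing is added beyond the grouping and comments):

  # give the target port its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the netns
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open the NVMe/TCP port toward the initiator interface and sanity-check reachability
  # (the script's ipts wrapper also tags the rule with an SPDK_NVMF comment, as shown in the trace)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1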
00:29:00.911 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.911 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:00.911 [2024-11-26 19:30:23.829357] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:00.911 [2024-11-26 19:30:23.830289] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:29:00.911 [2024-11-26 19:30:23.830323] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.911 [2024-11-26 19:30:23.910760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:00.911 [2024-11-26 19:30:23.953357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.911 [2024-11-26 19:30:23.953392] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.911 [2024-11-26 19:30:23.953399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.911 [2024-11-26 19:30:23.953404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.911 [2024-11-26 19:30:23.953409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:00.911 [2024-11-26 19:30:23.954821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.911 [2024-11-26 19:30:23.954928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.911 [2024-11-26 19:30:23.954928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:00.911 [2024-11-26 19:30:24.022851] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:01.170 [2024-11-26 19:30:24.023761] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:01.170 [2024-11-26 19:30:24.024176] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:01.170 [2024-11-26 19:30:24.024322] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
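The notices above come from the target process that nvmfappstart launched inside that namespace: with -m 0xE the EAL reports three available cores and starts reactors on cores 1 to 3, and because --interrupt-mode is set, the app thread and each nvmf_tgt poll group thread are switched to interrupt mode. The launch command, copied from the trace for readability (pid 3915499 and the workspace path are specific to this run):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xE

nvmfappstart then blocks on waitforlisten 3915499 until the process is up and serving the /var/tmp/spdk.sock RPC socket, after which the test script starts issuing RPCs.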
00:29:01.170 19:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.170 19:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:29:01.170 19:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:01.170 19:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:01.170 19:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:01.170 19:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.170 19:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:29:01.170 19:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:01.170 [2024-11-26 19:30:24.259657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.428 19:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:01.428 19:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:01.686 [2024-11-26 19:30:24.656087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.687 19:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:01.945 19:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:02.204 Malloc0 00:29:02.204 19:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:02.204 Delay0 00:29:02.204 19:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:02.463 19:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:02.722 NULL1 00:29:02.722 19:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
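With the target up, ns_hotplug_stress.sh assembles the subsystem it is going to stress, all over /var/tmp/spdk.sock. Condensed from the rpc.py calls traced above (every argument is verbatim from the trace; only the grouping and comments are added, and rpc_py is the same variable the script sets at the top of this test):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # TCP transport, with the extra options the tcp harness passes through (-o, -u 8192)
  $rpc_py nvmf_create_transport -t tcp -o -u 8192

  # one subsystem (allow any host, serial SPDK00000000000001), capped at 10 namespaces,
  # with data and discovery listeners on 10.0.0.2:4420
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # namespace 1: a 32 MB malloc bdev wrapped in a delay bdev (all four latency knobs at 1000000 us)
  $rpc_py bdev_malloc_create 32 512 -b Malloc0
  $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # namespace 2: a resizable null bdev, starting at the script's null_size=1000 with 512-byte blocks
  $rpc_py bdev_null_create NULL1 1000 512
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1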
00:29:02.980 19:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3915853 00:29:02.980 19:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:02.980 19:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:02.980 19:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:03.916 Read completed with error (sct=0, sc=11) 00:29:03.916 19:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:03.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:03.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.175 19:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:04.175 19:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:04.434 true 00:29:04.434 19:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:04.434 19:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.371 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:05.371 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:05.371 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:05.630 true 00:29:05.630 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:05.630 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.889 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:05.889 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:05.889 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:06.148 true 00:29:06.148 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:06.148 19:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:07.524 19:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:07.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:07.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:07.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:07.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:07.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:07.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:07.524 19:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:07.524 19:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:07.781 true 00:29:07.781 19:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:07.781 19:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.715 19:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:08.715 19:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:08.715 19:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:08.973 true 00:29:08.973 19:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:08.973 19:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.973 19:30:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:09.231 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:09.231 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:09.489 true 00:29:09.489 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:09.489 19:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:10.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:10.688 19:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:10.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:10.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:10.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:10.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:10.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:10.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:10.688 19:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:10.688 19:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:10.955 true 00:29:10.955 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:10.955 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.930 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.930 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:11.930 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:11.930 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:12.219 true 00:29:12.219 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:12.219 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:12.489 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:12.748 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:12.748 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:12.748 true 00:29:12.748 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:12.748 19:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:14.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.127 19:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:14.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.127 19:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:14.127 19:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:14.386 true 00:29:14.386 19:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:14.386 19:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:15.321 19:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:15.321 19:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:15.321 19:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:15.580 true 00:29:15.580 19:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:15.580 19:30:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:15.839 19:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:15.839 19:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:15.839 19:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:16.099 true 00:29:16.099 19:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:16.099 19:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:17.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.495 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:17.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.495 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:17.495 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:17.754 true 00:29:17.754 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:17.754 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:18.691 19:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:18.691 19:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:18.691 19:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:18.950 true 00:29:18.950 19:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:18.950 19:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:18.950 19:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:19.208 19:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:19.209 19:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:19.467 true 00:29:19.467 19:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:19.467 19:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:20.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.662 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:20.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.663 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:20.663 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:20.922 true 00:29:20.922 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:20.922 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.858 19:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:21.858 19:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:21.858 19:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:22.116 true 00:29:22.116 19:30:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:22.116 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.375 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:22.634 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:22.634 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:22.892 true 00:29:22.892 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:22.892 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.828 19:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:23.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:24.086 19:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:24.086 19:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:24.086 true 00:29:24.086 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:24.086 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:24.344 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:24.603 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:24.603 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:24.861 true 00:29:24.861 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:24.861 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
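Every iteration in the stretch above (and below) is the same @44..@50 pattern from ns_hotplug_stress.sh: while the initiator-side spdk_nvme_perf run started at @40 is still alive, detach namespace 1, re-attach Delay0, and grow NULL1 by one. In outline, using the commands exactly as they appear in the trace (only the loop wrapper and pid capture are sketched; PERF_PID, null_size and rpc_py are the script variables visible above):

  # @40: 30 s of queued random reads against the target, issued from the initiator side
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!                                    # @42 (pid capture sketched; the trace shows PERF_PID=3915853)

  while kill -0 $PERF_PID; do                    # @44: stop once perf has exited
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove namespace 1
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: hot-add it back
      null_size=$((null_size + 1))                                     # @49
      $rpc_py bdev_null_resize NULL1 $null_size                        # @50: resize namespace 2 online
  done

The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are the expected side effect of the hot-remove window: reads issued against the detached namespace fail, and -Q 1000 keeps perf running while only printing a sample of those errors.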
00:29:26.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.239 19:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:26.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.239 19:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:26.239 19:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:26.239 true 00:29:26.498 19:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:26.498 19:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:27.067 19:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:27.326 19:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:27.326 19:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:27.585 true 00:29:27.585 19:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:27.585 19:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:27.844 19:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:27.844 19:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:28.103 19:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:28.103 true 00:29:28.103 19:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:28.103 19:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:29.040 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:29.299 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:29.299 19:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:29.299 19:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:29.299 19:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:29.556 true 00:29:29.556 19:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:29.556 19:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:29.814 19:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:30.072 19:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:30.072 19:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:30.072 true 00:29:30.072 19:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:30.072 19:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:31.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:31.443 19:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:31.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:31.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:31.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:31.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:31.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:31.443 19:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:31.443 19:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:31.700 true 00:29:31.700 19:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:31.700 19:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:32.634 19:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:32.892 19:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:32.892 19:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:29:32.892 true 00:29:32.892 19:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:32.892 19:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.150 Initializing NVMe Controllers 00:29:33.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:33.150 Controller IO queue size 128, less than required. 00:29:33.150 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:33.150 Controller IO queue size 128, less than required. 00:29:33.150 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:33.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:33.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:33.150 Initialization complete. Launching workers. 
00:29:33.150 ======================================================== 00:29:33.150 Latency(us) 00:29:33.150 Device Information : IOPS MiB/s Average min max 00:29:33.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2039.12 1.00 43204.30 2309.58 1034276.51 00:29:33.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18274.10 8.92 7004.37 1335.41 370381.04 00:29:33.150 ======================================================== 00:29:33.150 Total : 20313.22 9.92 10638.26 1335.41 1034276.51 00:29:33.150 00:29:33.150 19:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:33.408 19:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:29:33.408 19:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:33.408 true 00:29:33.667 19:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3915853 00:29:33.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3915853) - No such process 00:29:33.667 19:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3915853 00:29:33.667 19:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.667 19:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:33.926 19:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:29:33.926 19:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:29:33.926 19:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:29:33.926 19:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:33.926 19:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:29:34.185 null0 00:29:34.185 19:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:34.185 19:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:34.185 19:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:29:34.185 null1 00:29:34.185 19:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:34.185 
19:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:34.185 19:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:29:34.444 null2 00:29:34.444 19:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:34.444 19:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:34.444 19:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:29:34.703 null3 00:29:34.703 19:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:34.703 19:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:34.703 19:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:29:34.962 null4 00:29:34.962 19:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:34.962 19:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:34.963 19:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:29:34.963 null5 00:29:34.963 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:34.963 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:34.963 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:29:35.222 null6 00:29:35.222 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:35.222 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:35.222 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:29:35.481 null7 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:35.481 19:30:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
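Editor's note: the xtrace entries above (ns_hotplug_stress.sh lines 62-64) interleave the launch of the eight background add_remove workers with the first RPCs those workers issue, which makes the launch pattern hard to follow. A minimal sketch of that launcher, reconstructed only from the traced commands; the nsid-to-bdev mapping (1..8 onto null0..null7) is inferred from the trace, not quoted from the script:

  # Sketch of the @62-@64 launcher as implied by the trace (assumed, not copied
  # from ns_hotplug_stress.sh): one background add_remove worker per null bdev.
  nthreads=8
  pids=()
  for (( i = 0; i < nthreads; i++ )); do
      add_remove "$((i + 1))" "null$i" &   # add_remove <nsid> <bdev>, traced at @63
      pids+=($!)                           # remember the worker PID, traced at @64
  done
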
00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.481 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
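Editor's note: each worker runs the add_remove helper, whose body can be read off the @14, @16, @17 and @18 entries in this trace: attach the given bdev as a namespace of nqn.2016-06.io.spdk:cnode1, detach it again, and repeat ten times. A hedged reconstruction (variable names nsid and bdev come from the @14 trace lines; the real function lives in test/nvmf/target/ns_hotplug_stress.sh):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Reconstruction of add_remove from the @14-@18 trace lines above.
  add_remove() {
      local nsid=$1 bdev=$2
      for (( i = 0; i < 10; i++ )); do
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }
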
00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
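Editor's note: for context around the workers, the @58-@60 entries earlier created the eight backing null bdevs (bdev_null_create nullN 100 4096), and the @66 entry traced just below waits on all eight worker PIDs before the namespaces are removed for the last time. A sketch of that bracketing, under the same assumptions as the sketches above:

  # Assumed bracketing around the workers, matching the @58-@60 and @66 entries.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for (( i = 0; i < nthreads; i++ )); do
      "$rpc_py" bdev_null_create "null$i" 100 4096   # name, size, block size as traced
  done
  # ... add_remove workers launched in the background as sketched earlier ...
  wait "${pids[@]}"                                  # traced at @66 with the eight PIDs
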
00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3921180 3921182 3921183 3921185 3921187 3921189 3921191 3921193 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:35.482 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:35.741 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:35.741 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:35.741 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:35.741 19:30:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:35.741 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.741 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.741 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:35.741 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.741 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.741 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:35.741 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.741 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.741 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:35.741 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.741 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.741 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.742 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:35.742 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.742 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:35.742 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.742 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.742 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:35.742 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.742 19:30:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.742 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:35.742 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.742 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.742 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:36.001 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:36.001 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:36.001 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:36.001 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:36.001 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:36.001 19:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:36.001 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:36.001 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.268 19:30:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.268 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:36.526 19:30:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:36.526 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:36.526 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:36.526 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:36.526 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:36.526 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:36.526 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:36.526 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:36.526 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.526 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.526 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:36.526 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.526 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.526 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:36.526 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.526 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.526 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:36.526 19:30:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.526 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.526 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:36.526 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.527 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.527 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.527 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:36.527 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.527 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:36.527 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.527 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.527 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:36.527 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.527 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.527 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:36.785 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:36.785 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:36.785 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:36.785 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:29:36.785 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:36.785 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:36.785 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:36.785 19:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.044 19:31:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.044 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:37.303 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:37.303 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:37.303 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:37.303 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:37.303 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:37.303 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:37.303 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:37.303 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.562 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.563 19:31:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:37.563 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.563 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.563 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:37.563 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:37.563 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:37.563 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:37.563 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:37.563 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:37.563 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:37.563 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:37.563 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.822 19:31:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.822 19:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:38.081 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:38.081 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:38.081 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:38.081 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:38.081 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:38.081 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:38.081 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:38.081 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.340 19:31:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.340 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:38.599 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:38.600 19:31:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.600 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:38.859 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:38.859 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:38.859 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:38.859 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:38.859 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:38.859 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:38.859 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:38.859 19:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:39.118 19:31:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.118 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.118 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:39.118 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.118 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.118 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.118 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:39.118 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.118 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:39.118 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.118 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.118 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:39.118 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.118 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.118 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:39.118 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.118 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.118 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:39.119 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.119 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.119 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:39.119 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.119 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.119 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:39.377 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:39.377 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:39.377 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:39.377 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:39.377 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:39.377 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:39.377 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:39.377 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:39.636 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.636 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.636 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.636 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.636 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.636 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.636 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:29:39.636 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.636 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.636 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.636 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.636 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.636 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:39.637 rmmod nvme_tcp 00:29:39.637 rmmod nvme_fabrics 00:29:39.637 rmmod nvme_keyring 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3915499 ']' 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3915499 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3915499 ']' 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3915499 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3915499 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3915499' 00:29:39.637 killing process with pid 3915499 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3915499 00:29:39.637 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3915499 00:29:39.895 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:39.895 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:39.895 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:39.895 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:39.895 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:39.895 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:39.895 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:29:39.895 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:39.895 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:39.895 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.896 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.896 19:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.799 19:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:41.799 00:29:41.799 real 0m47.186s 00:29:41.799 user 2m55.970s 00:29:41.799 sys 0m19.734s 00:29:41.799 19:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:41.799 19:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:41.799 ************************************ 00:29:41.799 END TEST nvmf_ns_hotplug_stress 00:29:41.799 ************************************ 00:29:42.058 19:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:42.058 19:31:04 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:42.058 19:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:42.058 19:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:42.058 ************************************ 00:29:42.058 START TEST nvmf_delete_subsystem 00:29:42.058 ************************************ 00:29:42.058 19:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:42.058 * Looking for test storage... 00:29:42.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:42.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.058 --rc genhtml_branch_coverage=1 00:29:42.058 --rc genhtml_function_coverage=1 00:29:42.058 --rc genhtml_legend=1 00:29:42.058 --rc geninfo_all_blocks=1 00:29:42.058 --rc geninfo_unexecuted_blocks=1 00:29:42.058 00:29:42.058 ' 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:42.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.058 --rc genhtml_branch_coverage=1 00:29:42.058 --rc genhtml_function_coverage=1 00:29:42.058 --rc genhtml_legend=1 00:29:42.058 --rc geninfo_all_blocks=1 00:29:42.058 --rc geninfo_unexecuted_blocks=1 00:29:42.058 00:29:42.058 ' 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:42.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.058 --rc genhtml_branch_coverage=1 00:29:42.058 --rc genhtml_function_coverage=1 00:29:42.058 --rc genhtml_legend=1 00:29:42.058 --rc geninfo_all_blocks=1 00:29:42.058 --rc geninfo_unexecuted_blocks=1 00:29:42.058 00:29:42.058 ' 00:29:42.058 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:42.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.059 --rc genhtml_branch_coverage=1 00:29:42.059 --rc genhtml_function_coverage=1 00:29:42.059 --rc 
genhtml_legend=1 00:29:42.059 --rc geninfo_all_blocks=1 00:29:42.059 --rc geninfo_unexecuted_blocks=1 00:29:42.059 00:29:42.059 ' 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:42.059 19:31:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:42.059 19:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:48.632 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.632 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:48.632 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:48.632 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:48.632 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:48.633 19:31:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:48.633 19:31:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:48.633 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:48.633 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.633 19:31:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:48.633 Found net devices under 0000:86:00.0: cvl_0_0 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:48.633 Found net devices under 0000:86:00.1: cvl_0_1 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.633 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.634 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.634 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:48.634 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.634 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.634 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:48.634 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:48.634 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.634 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.634 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:48.634 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:48.634 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.634 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.634 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.634 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.634 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:48.634 19:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:48.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:48.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:29:48.634 00:29:48.634 --- 10.0.0.2 ping statistics --- 00:29:48.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.634 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:48.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:29:48.634 00:29:48.634 --- 10.0.0.1 ping statistics --- 00:29:48.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.634 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3925480 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3925480 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3925480 ']' 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
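For readers following the trace, the nvmftestinit phase above reduces to a short sequence of iproute2/iptables commands: one port of the e810 pair (cvl_0_0) is moved into a private network namespace to act as the target side, the peer port (cvl_0_1) stays in the host namespace as the initiator side, and connectivity is verified with ping before the target is started. A minimal standalone sketch of that plumbing, assuming the same cvl_0_0/cvl_0_1 device names and 10.0.0.0/24 addressing used by the test scripts (run as root):

#!/usr/bin/env bash
# Sketch of the TCP test-bed setup traced above (nvmf/common.sh, nvmf_tcp_init).
# Assumes the two e810 ports already carry the names cvl_0_0 and cvl_0_1.
set -e

NETNS=cvl_0_0_ns_spdk            # namespace that will own the target-side port
TARGET_IF=cvl_0_0                # port handed to the SPDK target
INITIATOR_IF=cvl_0_1             # port left in the host namespace for the initiator
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

# Start from a clean addressing state on both ports.
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

# Hand the target-side port to its own namespace and address both ends.
ip netns add "$NETNS"
ip link set "$TARGET_IF" netns "$NETNS"
ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$NETNS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NETNS" ip link set "$TARGET_IF" up
ip netns exec "$NETNS" ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface, tagged so cleanup can find it.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check both directions before starting the target, as the trace does.
ping -c 1 "$TARGET_IP"
ip netns exec "$NETNS" ping -c 1 "$INITIATOR_IP"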
00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:48.634 [2024-11-26 19:31:11.130439] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:48.634 [2024-11-26 19:31:11.131357] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:29:48.634 [2024-11-26 19:31:11.131391] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.634 [2024-11-26 19:31:11.210042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:48.634 [2024-11-26 19:31:11.249087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.634 [2024-11-26 19:31:11.249124] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.634 [2024-11-26 19:31:11.249131] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.634 [2024-11-26 19:31:11.249137] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.634 [2024-11-26 19:31:11.249142] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:48.634 [2024-11-26 19:31:11.250347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.634 [2024-11-26 19:31:11.250348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.634 [2024-11-26 19:31:11.317479] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:48.634 [2024-11-26 19:31:11.317907] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:48.634 [2024-11-26 19:31:11.318193] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
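The nvmfappstart step seen here amounts to launching the target binary inside that namespace with a two-core mask and --interrupt-mode, then waiting for its RPC socket before any rpc.py calls are issued. A hedged sketch of that launch, reusing the binary path and flags from the trace; the readiness poll via rpc.py spdk_get_version is an illustrative choice, not the exact wait loop used by autotest_common.sh:

#!/usr/bin/env bash
# Sketch of the target launch traced above (nvmf/common.sh, nvmfappstart).
set -e

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NETNS=cvl_0_0_ns_spdk
RPC_SOCK=/var/tmp/spdk.sock      # default rpc.py socket, named in the "Waiting for process..." message

# -m 0x3: cores 0 and 1, -e 0xFFFF: tracepoint group mask, --interrupt-mode: no busy polling.
ip netns exec "$NETNS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!

# Poll until the app answers on its RPC socket (illustrative readiness check).
for _ in $(seq 1 100); do
    if "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" spdk_get_version >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done
echo "nvmf_tgt running as pid $nvmfpid"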
00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:48.634 [2024-11-26 19:31:11.395161] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:48.634 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.635 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:48.635 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.635 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:48.635 [2024-11-26 19:31:11.423438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.635 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.635 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:48.635 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.635 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:48.635 NULL1 00:29:48.635 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.635 19:31:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:48.635 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.635 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:48.635 Delay0 00:29:48.635 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.635 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:48.635 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.635 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:48.635 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.635 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3925575 00:29:48.635 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:48.635 19:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:48.635 [2024-11-26 19:31:11.535595] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
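In this trace, rpc_cmd is the test framework's wrapper around SPDK's scripts/rpc.py, so the target-side setup and the perf launch above correspond roughly to the following standalone sequence (arguments copied verbatim from the trace; the relative rpc.py/spdk_nvme_perf paths and use of the default RPC socket are assumptions):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    # the test then deletes the subsystem out from under the running perf job:
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The flood of "completed with error" completions and the later "No such process" on the perf pid are the behavior this delete_subsystem test exercises: removing the subsystem mid-run aborts the outstanding I/O and the initiator exits with errors.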
00:29:50.537 19:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:50.537 19:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.537 19:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:50.793 Write completed with error (sct=0, sc=8) 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 starting I/O failed: -6 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 Write completed with error (sct=0, sc=8) 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 starting I/O failed: -6 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 Write completed with error (sct=0, sc=8) 00:29:50.793 Write completed with error (sct=0, sc=8) 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 starting I/O failed: -6 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 Write completed with error (sct=0, sc=8) 00:29:50.793 Write completed with error (sct=0, sc=8) 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 starting I/O failed: -6 00:29:50.793 Write completed with error (sct=0, sc=8) 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 starting I/O failed: -6 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 Write completed with error (sct=0, sc=8) 00:29:50.793 Write completed with error (sct=0, sc=8) 00:29:50.793 starting I/O failed: -6 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 Write completed with error (sct=0, sc=8) 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 starting I/O failed: -6 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 Write completed with error (sct=0, sc=8) 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 starting I/O failed: -6 00:29:50.793 Write completed with error (sct=0, sc=8) 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 Read completed with error (sct=0, sc=8) 00:29:50.793 Write completed with error (sct=0, sc=8) 00:29:50.794 starting I/O failed: -6 00:29:50.794 [2024-11-26 19:31:13.752003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19aa860 is same with the state(6) to be set 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error 
(sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 
Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 [2024-11-26 19:31:13.752704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19aa680 is same with the state(6) to be set 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 starting I/O failed: -6 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 starting I/O failed: -6 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 starting I/O failed: -6 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 starting I/O failed: -6 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 starting I/O failed: -6 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 starting I/O failed: -6 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 starting I/O failed: -6 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 starting I/O failed: -6 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 starting I/O failed: -6 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 starting I/O failed: -6 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 Read completed with error (sct=0, sc=8) 00:29:50.794 starting I/O failed: -6 00:29:50.794 Write completed with error (sct=0, sc=8) 00:29:50.794 starting I/O failed: -6 00:29:51.725 [2024-11-26 19:31:14.714574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ab9b0 is same with the state(6) to be set 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read 
completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 [2024-11-26 19:31:14.755435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19aa4a0 is same with the state(6) to be set 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 [2024-11-26 19:31:14.756564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f44b000d7e0 is same with the state(6) to be set 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.725 Read completed 
with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Write completed with error (sct=0, sc=8) 00:29:51.725 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Write completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Write completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Write completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Write completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Write completed with error (sct=0, sc=8) 00:29:51.726 Write completed with error (sct=0, sc=8) 00:29:51.726 [2024-11-26 19:31:14.756713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f44b0000c40 is same with the state(6) to be set 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Write completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Write completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Write completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Write completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Write completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Write completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Write completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Write completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Write completed with error (sct=0, sc=8) 00:29:51.726 Write completed with error (sct=0, sc=8) 00:29:51.726 Write completed with error (sct=0, 
sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Read completed with error (sct=0, sc=8) 00:29:51.726 Write completed with error (sct=0, sc=8) 00:29:51.726 [2024-11-26 19:31:14.757224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f44b000d020 is same with the state(6) to be set 00:29:51.726 Initializing NVMe Controllers 00:29:51.726 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:51.726 Controller IO queue size 128, less than required. 00:29:51.726 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:51.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:51.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:51.726 Initialization complete. Launching workers. 00:29:51.726 ======================================================== 00:29:51.726 Latency(us) 00:29:51.726 Device Information : IOPS MiB/s Average min max 00:29:51.726 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 146.60 0.07 913396.76 227.68 1009409.23 00:29:51.726 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.47 0.08 1115351.90 333.90 2002617.95 00:29:51.726 ======================================================== 00:29:51.726 Total : 313.07 0.15 1020785.61 227.68 2002617.95 00:29:51.726 00:29:51.726 [2024-11-26 19:31:14.757779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ab9b0 (9): Bad file descriptor 00:29:51.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:51.726 19:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.726 19:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:51.726 19:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3925575 00:29:51.726 19:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3925575 00:29:52.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3925575) - No such process 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3925575 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3925575 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t 
wait 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3925575 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:52.294 [2024-11-26 19:31:15.298782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3926139 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:52.294 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3926139 00:29:52.294 19:31:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:52.294 [2024-11-26 19:31:15.380976] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:52.860 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:52.860 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3926139 00:29:52.860 19:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:53.424 19:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:53.424 19:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3926139 00:29:53.424 19:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:53.989 19:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:53.989 19:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3926139 00:29:53.989 19:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:54.247 19:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:54.247 19:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3926139 00:29:54.247 19:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:54.812 19:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:54.812 19:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3926139 00:29:54.812 19:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:55.379 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:55.379 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3926139 00:29:55.379 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:55.638 Initializing NVMe Controllers 00:29:55.638 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:55.638 Controller IO queue size 128, less than required. 00:29:55.638 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:55.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:55.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:55.638 Initialization complete. Launching workers. 
00:29:55.638 ======================================================== 00:29:55.638 Latency(us) 00:29:55.638 Device Information : IOPS MiB/s Average min max 00:29:55.638 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002945.48 1000155.86 1041438.04 00:29:55.638 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003924.31 1000168.62 1010969.78 00:29:55.638 ======================================================== 00:29:55.638 Total : 256.00 0.12 1003434.89 1000155.86 1041438.04 00:29:55.638 00:29:55.898 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:55.898 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3926139 00:29:55.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3926139) - No such process 00:29:55.898 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3926139 00:29:55.898 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:55.898 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:55.898 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:55.898 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:29:55.898 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:55.898 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:29:55.898 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:55.898 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:55.898 rmmod nvme_tcp 00:29:55.898 rmmod nvme_fabrics 00:29:55.898 rmmod nvme_keyring 00:29:55.898 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:55.899 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:55.899 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:55.899 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3925480 ']' 00:29:55.899 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3925480 00:29:55.899 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3925480 ']' 00:29:55.899 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3925480 00:29:55.899 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:55.899 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:55.899 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3925480 00:29:55.899 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:55.899 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:55.899 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3925480' 00:29:55.899 killing process with pid 3925480 00:29:55.899 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3925480 00:29:55.899 19:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3925480 00:29:56.158 19:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:56.158 19:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:56.158 19:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:56.158 19:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:56.158 19:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:56.158 19:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:56.158 19:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:56.158 19:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:56.158 19:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:56.158 19:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.158 19:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.158 19:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:58.700 00:29:58.700 real 0m16.258s 00:29:58.700 user 0m26.451s 00:29:58.700 sys 0m6.069s 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:58.700 ************************************ 00:29:58.700 END TEST nvmf_delete_subsystem 00:29:58.700 ************************************ 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:58.700 ************************************ 00:29:58.700 START TEST nvmf_host_management 00:29:58.700 ************************************ 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:58.700 * Looking for test storage... 00:29:58.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:58.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.700 --rc genhtml_branch_coverage=1 00:29:58.700 --rc genhtml_function_coverage=1 00:29:58.700 --rc genhtml_legend=1 00:29:58.700 --rc geninfo_all_blocks=1 00:29:58.700 --rc geninfo_unexecuted_blocks=1 00:29:58.700 00:29:58.700 ' 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:58.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.700 --rc genhtml_branch_coverage=1 00:29:58.700 --rc genhtml_function_coverage=1 00:29:58.700 --rc genhtml_legend=1 00:29:58.700 --rc geninfo_all_blocks=1 00:29:58.700 --rc geninfo_unexecuted_blocks=1 00:29:58.700 00:29:58.700 ' 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:58.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.700 --rc genhtml_branch_coverage=1 00:29:58.700 --rc genhtml_function_coverage=1 00:29:58.700 --rc genhtml_legend=1 00:29:58.700 --rc geninfo_all_blocks=1 00:29:58.700 --rc geninfo_unexecuted_blocks=1 00:29:58.700 00:29:58.700 ' 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:58.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.700 --rc genhtml_branch_coverage=1 00:29:58.700 --rc genhtml_function_coverage=1 00:29:58.700 --rc genhtml_legend=1 
00:29:58.700 --rc geninfo_all_blocks=1 00:29:58.700 --rc geninfo_unexecuted_blocks=1 00:29:58.700 00:29:58.700 ' 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:58.700 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.701 19:31:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:58.701 19:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:03.978 19:31:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:03.978 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:03.978 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
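At this point nvmf/common.sh has matched the two Intel E810 ports (vendor 0x8086, device 0x159b, ice driver) out of its supported-device tables and is about to collect the net interfaces behind each PCI function. A minimal standalone sketch of the same sysfs-based discovery follows; it scans /sys/bus/pci directly instead of going through the script's pci_bus_cache arrays, so treat it as an illustration rather than the helper's actual implementation.

# Hedged sketch: list NICs matching one PCI vendor:device pair via sysfs,
# the same data nvmf/common.sh gathers above (0x8086:0x159b = Intel E810).
vendor=0x8086 device=0x159b
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == "$vendor" && $(<"$pci/device") == "$device" ]] || continue
    if [[ -e $pci/driver ]]; then drv=$(basename "$(readlink -f "$pci/driver")"); else drv=unbound; fi
    echo "Found ${pci##*/} ($vendor - $device), driver: $drv"
    for net in "$pci"/net/*; do                  # e.g. cvl_0_0 / cvl_0_1 in this run
        [[ -e $net ]] && echo "  net device: ${net##*/}"
    done
done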
00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:03.978 Found net devices under 0000:86:00.0: cvl_0_0 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.978 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:03.979 Found net devices under 0000:86:00.1: cvl_0_1 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:03.979 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.238 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.238 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.238 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:04.238 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:04.238 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:04.238 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:04.238 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:04.238 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:04.238 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:04.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:04.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:30:04.238 00:30:04.238 --- 10.0.0.2 ping statistics --- 00:30:04.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.238 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:30:04.238 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:04.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:04.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:30:04.238 00:30:04.239 --- 10.0.0.1 ping statistics --- 00:30:04.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.239 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:30:04.239 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:04.239 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:30:04.239 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:04.239 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:04.239 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:04.239 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:04.239 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:04.239 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:04.239 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:04.499 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:04.499 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:04.499 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:04.499 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:04.499 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:04.499 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:04.499 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3930253 00:30:04.499 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3930253 00:30:04.499 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:04.499 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3930253 ']' 00:30:04.499 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.499 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:04.499 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:04.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:04.499 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:04.499 19:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:04.499 [2024-11-26 19:31:27.419795] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:04.499 [2024-11-26 19:31:27.420706] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:30:04.499 [2024-11-26 19:31:27.420737] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.499 [2024-11-26 19:31:27.501818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:04.499 [2024-11-26 19:31:27.542223] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:04.499 [2024-11-26 19:31:27.542260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.499 [2024-11-26 19:31:27.542267] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:04.499 [2024-11-26 19:31:27.542272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:04.499 [2024-11-26 19:31:27.542277] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:04.499 [2024-11-26 19:31:27.543909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:04.499 [2024-11-26 19:31:27.544018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:04.499 [2024-11-26 19:31:27.544125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.499 [2024-11-26 19:31:27.544127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:04.759 [2024-11-26 19:31:27.613157] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:04.759 [2024-11-26 19:31:27.614425] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:04.759 [2024-11-26 19:31:27.614523] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:04.759 [2024-11-26 19:31:27.615127] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:04.759 [2024-11-26 19:31:27.615157] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
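The trace above is the whole target bring-up for this phy TCP run: nvmftestinit moves one E810 port (cvl_0_0) into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, leaves its peer (cvl_0_1) in the root namespace as the initiator at 10.0.0.1, opens TCP port 4420 in iptables, verifies both directions with ping, and nvmfappstart then launches nvmf_tgt inside the namespace in interrupt mode on cores 1-4. The condensed sketch below is assembled from those same commands; interface names and the build path are specific to this test bed, so treat them as assumptions elsewhere.

# Condensed from the nvmftestinit/nvmfappstart trace above (NIC names and
# paths belong to this test bed).
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                              # target-side E810 port
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0      # target side, inside namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                           # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                       # target -> initiator
# Start the target as nvmfappstart does: interrupt mode, cores 1-4, all trace groups.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &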
00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:05.327 [2024-11-26 19:31:28.292829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:05.327 Malloc0 00:30:05.327 [2024-11-26 19:31:28.388913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:05.327 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3930415 00:30:05.327 19:31:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3930415 /var/tmp/bdevperf.sock 00:30:05.586 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3930415 ']' 00:30:05.586 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:05.586 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:05.586 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:05.586 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:05.586 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:05.586 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:05.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:05.586 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:05.586 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:05.586 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.586 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:05.586 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.586 { 00:30:05.586 "params": { 00:30:05.586 "name": "Nvme$subsystem", 00:30:05.586 "trtype": "$TEST_TRANSPORT", 00:30:05.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.586 "adrfam": "ipv4", 00:30:05.586 "trsvcid": "$NVMF_PORT", 00:30:05.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.586 "hdgst": ${hdgst:-false}, 00:30:05.586 "ddgst": ${ddgst:-false} 00:30:05.586 }, 00:30:05.586 "method": "bdev_nvme_attach_controller" 00:30:05.586 } 00:30:05.586 EOF 00:30:05.586 )") 00:30:05.586 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:05.586 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
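gen_nvmf_target_json above expands that heredoc into one bdev_nvme_attach_controller entry for subsystem 0 and pipes it through jq; bdevperf reads the result over --json /dev/fd/63 together with the queue-depth, I/O-size, workload and runtime options from host_management.sh@72. The sketch below shows a roughly equivalent standalone invocation using a config file instead of a process substitution. Only the inner params/method object is printed in the trace just below, so the outer "subsystems"/"bdev"/"config" wrapper here is an assumption based on SPDK's usual JSON config layout.

# Hedged sketch: the kind of config bdevperf is fed here, written to a file.
# Outer wrapper assumed; the inner object matches the fragment printed below.
cat > /tmp/nvme0_attach.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Same options as host_management.sh@72: queue depth 64, 64 KiB I/O, verify, 10 s.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0_attach.json \
    -q 64 -o 65536 -w verify -t 10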
00:30:05.586 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:05.586 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:05.586 "params": { 00:30:05.586 "name": "Nvme0", 00:30:05.586 "trtype": "tcp", 00:30:05.586 "traddr": "10.0.0.2", 00:30:05.586 "adrfam": "ipv4", 00:30:05.586 "trsvcid": "4420", 00:30:05.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:05.586 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:05.586 "hdgst": false, 00:30:05.586 "ddgst": false 00:30:05.586 }, 00:30:05.586 "method": "bdev_nvme_attach_controller" 00:30:05.586 }' 00:30:05.586 [2024-11-26 19:31:28.482967] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:30:05.586 [2024-11-26 19:31:28.483015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930415 ] 00:30:05.586 [2024-11-26 19:31:28.556551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.586 [2024-11-26 19:31:28.597911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.843 Running I/O for 10 seconds... 00:30:06.102 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:06.102 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:06.102 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:06.102 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.102 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:06.102 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.103 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:06.103 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:06.103 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:06.103 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:06.103 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:06.103 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:06.103 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:06.103 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:06.103 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:06.103 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:06.103 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.103 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:06.103 19:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.103 19:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=102 00:30:06.103 19:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 102 -ge 100 ']' 00:30:06.103 19:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:06.103 19:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:06.103 19:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:06.103 19:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:06.103 19:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.103 19:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:06.103 [2024-11-26 19:31:29.015497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.103 [2024-11-26 19:31:29.015537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.015547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.103 [2024-11-26 19:31:29.015554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.015563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.103 [2024-11-26 19:31:29.015570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.015577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.103 [2024-11-26 19:31:29.015584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.015591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6510 is same with the state(6) to be set 00:30:06.103 19:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.103 19:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:06.103 19:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.103 19:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:06.103 [2024-11-26 19:31:29.023312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 
19:31:29.023473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 
19:31:29.023618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.103 [2024-11-26 19:31:29.023684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.103 [2024-11-26 19:31:29.023693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.023702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.023716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.023731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.023746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.023761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 
19:31:29.023774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.023788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.023805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.023819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.023833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.023847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.023861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.023876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.023890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.023905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 
19:31:29.023919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.023933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.023948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.023963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.023977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.023993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.023999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.024007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.024013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.024021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.024027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.024035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.024041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.024049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.024056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 
19:31:29.024064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.024070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.024078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.024085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.024093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.024099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.024107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.024114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.024122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.024128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.024136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.024142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.024150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.024156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.024168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.024175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.024183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.024189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.024197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.024203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 
19:31:29.024212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.024219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.024227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.024233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.024241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.024247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.104 [2024-11-26 19:31:29.024255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.104 [2024-11-26 19:31:29.024261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.105 [2024-11-26 19:31:29.024269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.105 [2024-11-26 19:31:29.024275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.105 [2024-11-26 19:31:29.025197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:06.105 task offset: 24576 on job bdev=Nvme0n1 fails 00:30:06.105 00:30:06.105 Latency(us) 00:30:06.105 [2024-11-26T18:31:29.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.105 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:06.105 Job: Nvme0n1 ended in about 0.11 seconds with error 00:30:06.105 Verification LBA range: start 0x0 length 0x400 00:30:06.105 Nvme0n1 : 0.11 1757.89 109.87 585.96 0.00 25185.11 1287.31 27213.04 00:30:06.105 [2024-11-26T18:31:29.219Z] =================================================================================================================== 00:30:06.105 [2024-11-26T18:31:29.219Z] Total : 1757.89 109.87 585.96 0.00 25185.11 1287.31 27213.04 00:30:06.105 [2024-11-26 19:31:29.027632] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:06.105 [2024-11-26 19:31:29.027650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e6510 (9): Bad file descriptor 00:30:06.105 19:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.105 19:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:06.105 [2024-11-26 19:31:29.078838] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
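The abort storm and controller reset above are the expected effect of the test step at host_management.sh@84: with bdevperf driving I/O against Nvme0n1, removing nqn.2016-06.io.spdk:host0 from cnode0 tears down the queue pairs (the SQ DELETION completions), and the host is added back at @85 so the initiator can reset and reconnect. The outline below lists scripts/rpc.py equivalents of the target-side calls this test batches through rpcs.txt; the method names match the trace, but the exact flags the batch file uses are not shown here, so treat them as illustrative.

# Hedged outline of the target-side RPC sequence behind this test; flags are
# illustrative, method names appear in the trace above.
RPC=scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                   # host_management.sh@18
$RPC bdev_malloc_create 64 512 -b Malloc0                      # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# While the initiator has I/O in flight, drop and later restore its host NQN:
$RPC nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # -> aborts + reset
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0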
00:30:07.035 19:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3930415 00:30:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3930415) - No such process 00:30:07.035 19:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:07.035 19:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:07.035 19:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:07.035 19:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:07.035 19:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:07.035 19:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:07.035 19:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:07.035 19:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:07.035 { 00:30:07.035 "params": { 00:30:07.035 "name": "Nvme$subsystem", 00:30:07.035 "trtype": "$TEST_TRANSPORT", 00:30:07.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.035 "adrfam": "ipv4", 00:30:07.035 "trsvcid": "$NVMF_PORT", 00:30:07.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.035 "hdgst": ${hdgst:-false}, 00:30:07.035 "ddgst": ${ddgst:-false} 00:30:07.035 }, 00:30:07.035 "method": "bdev_nvme_attach_controller" 00:30:07.035 } 00:30:07.035 EOF 00:30:07.035 )") 00:30:07.035 19:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:07.035 19:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:07.035 19:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:07.035 19:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:07.035 "params": { 00:30:07.035 "name": "Nvme0", 00:30:07.035 "trtype": "tcp", 00:30:07.035 "traddr": "10.0.0.2", 00:30:07.035 "adrfam": "ipv4", 00:30:07.035 "trsvcid": "4420", 00:30:07.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:07.035 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:07.035 "hdgst": false, 00:30:07.035 "ddgst": false 00:30:07.035 }, 00:30:07.035 "method": "bdev_nvme_attach_controller" 00:30:07.035 }' 00:30:07.035 [2024-11-26 19:31:30.085256] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
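For readability, the attach entry that gen_nvmf_target_json assembled above and streamed to bdevperf over /dev/fd/62 can be pretty-printed with the same jq the helper already uses. A sketch follows; the object is exactly what the trace printed, only re-indented here, and the surrounding config wrapper the helper adds is elided:
  jq . <<'JSON'
  {
    "params": {
      "name": "Nvme0",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode0",
      "hostnqn": "nqn.2016-06.io.spdk:host0",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }
  JSON
With this config, bdevperf attaches Nvme0 over TCP to nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420 before starting the 64-deep verify workload shown below.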
00:30:07.035 [2024-11-26 19:31:30.085306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930739 ] 00:30:07.292 [2024-11-26 19:31:30.161576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.292 [2024-11-26 19:31:30.202530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.549 Running I/O for 1 seconds... 00:30:08.479 1984.00 IOPS, 124.00 MiB/s 00:30:08.479 Latency(us) 00:30:08.479 [2024-11-26T18:31:31.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.479 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:08.479 Verification LBA range: start 0x0 length 0x400 00:30:08.479 Nvme0n1 : 1.00 2041.40 127.59 0.00 0.00 30861.03 7864.32 26838.55 00:30:08.479 [2024-11-26T18:31:31.593Z] =================================================================================================================== 00:30:08.479 [2024-11-26T18:31:31.593Z] Total : 2041.40 127.59 0.00 0.00 30861.03 7864.32 26838.55 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:08.737 rmmod nvme_tcp 00:30:08.737 rmmod nvme_fabrics 00:30:08.737 rmmod nvme_keyring 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3930253 ']' 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3930253 00:30:08.737 19:31:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3930253 ']' 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3930253 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3930253 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3930253' 00:30:08.737 killing process with pid 3930253 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3930253 00:30:08.737 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3930253 00:30:08.998 [2024-11-26 19:31:31.972068] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:08.998 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:08.998 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:08.998 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:08.998 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:08.998 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:30:08.998 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:08.998 19:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:30:08.998 19:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:08.998 19:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:08.998 19:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.998 19:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.998 19:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:11.533 00:30:11.533 real 0m12.780s 00:30:11.533 user 
0m17.478s 00:30:11.533 sys 0m6.214s 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:11.533 ************************************ 00:30:11.533 END TEST nvmf_host_management 00:30:11.533 ************************************ 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:11.533 ************************************ 00:30:11.533 START TEST nvmf_lvol 00:30:11.533 ************************************ 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:11.533 * Looking for test storage... 00:30:11.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:11.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.533 --rc genhtml_branch_coverage=1 00:30:11.533 --rc genhtml_function_coverage=1 00:30:11.533 --rc genhtml_legend=1 00:30:11.533 --rc geninfo_all_blocks=1 00:30:11.533 --rc geninfo_unexecuted_blocks=1 00:30:11.533 00:30:11.533 ' 00:30:11.533 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:11.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.533 --rc genhtml_branch_coverage=1 00:30:11.533 --rc genhtml_function_coverage=1 00:30:11.533 --rc genhtml_legend=1 00:30:11.534 --rc geninfo_all_blocks=1 00:30:11.534 --rc geninfo_unexecuted_blocks=1 00:30:11.534 00:30:11.534 ' 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:11.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.534 --rc genhtml_branch_coverage=1 00:30:11.534 --rc genhtml_function_coverage=1 00:30:11.534 --rc genhtml_legend=1 00:30:11.534 --rc geninfo_all_blocks=1 00:30:11.534 --rc geninfo_unexecuted_blocks=1 00:30:11.534 00:30:11.534 ' 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:11.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.534 --rc genhtml_branch_coverage=1 00:30:11.534 --rc genhtml_function_coverage=1 
00:30:11.534 --rc genhtml_legend=1 00:30:11.534 --rc geninfo_all_blocks=1 00:30:11.534 --rc geninfo_unexecuted_blocks=1 00:30:11.534 00:30:11.534 ' 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.534 19:31:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:11.534 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:11.535 19:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:18.110 19:31:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:18.110 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:18.110 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:18.110 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:18.111 Found net devices under 0000:86:00.0: cvl_0_0 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:18.111 Found net devices under 0000:86:00.1: cvl_0_1 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:18.111 19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.111 
19:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:18.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:18.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:30:18.111 00:30:18.111 --- 10.0.0.2 ping statistics --- 00:30:18.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.111 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:18.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:30:18.111 00:30:18.111 --- 10.0.0.1 ping statistics --- 00:30:18.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.111 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3934496 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3934496 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3934496 ']' 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:18.111 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:18.112 [2024-11-26 19:31:40.285865] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
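The nvmf_tcp_init trace above is dense; condensed, the network plumbing it performs is the sketch below, using the interface names and addresses this run happened to get (cvl_0_0/cvl_0_1 on the e810 ports, 10.0.0.0/24). Helper wrappers and the iptables comment tag are omitted for brevity:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the initiator interface
  ping -c 1 10.0.0.2                                             # root ns -> target ns, verified above (0.277 ms)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns, verified above (0.194 ms)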
00:30:18.112 [2024-11-26 19:31:40.286792] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:30:18.112 [2024-11-26 19:31:40.286826] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.112 [2024-11-26 19:31:40.363691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:18.112 [2024-11-26 19:31:40.405348] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:18.112 [2024-11-26 19:31:40.405385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:18.112 [2024-11-26 19:31:40.405391] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.112 [2024-11-26 19:31:40.405397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.112 [2024-11-26 19:31:40.405403] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:18.112 [2024-11-26 19:31:40.406791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.112 [2024-11-26 19:31:40.406816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:18.112 [2024-11-26 19:31:40.406817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.112 [2024-11-26 19:31:40.475172] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:18.112 [2024-11-26 19:31:40.475948] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:18.112 [2024-11-26 19:31:40.476023] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:18.112 [2024-11-26 19:31:40.476153] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
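The DPDK, reactor and spdk_thread notices above come from the target launched a few lines earlier. Stripped of the Jenkins workspace paths, the launch amounts to the sketch below; $SPDK is a placeholder for the checked-out tree, and waitforlisten is the suite helper visible in the trace, which blocks until the RPC socket /var/tmp/spdk.sock accepts connections:
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # wait for /var/tmp/spdk.sock before issuing RPCs
Here -m 0x7 gives three reactors (cores 0-2), -e 0xFFFF sets the tracepoint group mask, and --interrupt-mode switches the app thread and the nvmf poll-group threads to interrupt-driven scheduling, which is what the thread.c "to intr mode" notices confirm.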
00:30:18.112 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:18.112 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:30:18.112 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:18.112 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:18.112 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:18.112 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:18.112 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:18.112 [2024-11-26 19:31:40.715626] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.112 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:18.112 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:18.112 19:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:18.112 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:18.112 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:18.371 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:18.631 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=47d6207f-5e68-4526-9712-46804921151a 00:30:18.631 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 47d6207f-5e68-4526-9712-46804921151a lvol 20 00:30:18.890 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d757cb1f-24b9-4c8c-b896-9ce894ae021b 00:30:18.890 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:18.890 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d757cb1f-24b9-4c8c-b896-9ce894ae021b 00:30:19.148 19:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:19.406 [2024-11-26 19:31:42.267509] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:30:19.406 19:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:19.406 19:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3934795 00:30:19.406 19:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:19.406 19:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:20.778 19:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d757cb1f-24b9-4c8c-b896-9ce894ae021b MY_SNAPSHOT 00:30:20.778 19:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=11ce30de-0e64-42a2-bac1-6e981bba5e9d 00:30:20.778 19:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d757cb1f-24b9-4c8c-b896-9ce894ae021b 30 00:30:21.035 19:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 11ce30de-0e64-42a2-bac1-6e981bba5e9d MY_CLONE 00:30:21.293 19:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0532709a-056d-494b-9d59-90586cbefdc5 00:30:21.293 19:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0532709a-056d-494b-9d59-90586cbefdc5 00:30:21.858 19:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3934795 00:30:29.966 Initializing NVMe Controllers 00:30:29.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:29.966 Controller IO queue size 128, less than required. 00:30:29.966 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:29.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:29.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:29.966 Initialization complete. Launching workers. 
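The lvol test body above drives the target entirely through scripts/rpc.py. Condensed, the sequence is the sketch below; the <...> placeholders stand in for the UUIDs this run got (47d6207f-... for the lvstore, d757cb1f-... for the lvol, 11ce30de-... for the snapshot, 0532709a-... for the clone):
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                                   # Malloc0
  rpc.py bdev_malloc_create 64 512                                   # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe the two malloc bdevs
  rpc.py bdev_lvol_create_lvstore raid0 lvs                          # -> <lvs-uuid>
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20                      # initial lvol (size 20)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
  rpc.py bdev_lvol_resize <lvol-uuid> 30                             # grow the live lvol to its final size
  rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
  rpc.py bdev_lvol_inflate <clone-uuid>                              # make the clone independent of its snapshot
While those RPCs run, the 10-second spdk_nvme_perf job started with -c 0x18 exercises the exported namespace from cores 3 and 4, which is why the results below report the two queues on lcore 3 and lcore 4.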
00:30:29.966 ======================================================== 00:30:29.966 Latency(us) 00:30:29.966 Device Information : IOPS MiB/s Average min max 00:30:29.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12438.50 48.59 10294.07 3739.64 57335.78 00:30:29.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12557.40 49.05 10196.75 4827.58 55252.48 00:30:29.966 ======================================================== 00:30:29.966 Total : 24995.90 97.64 10245.18 3739.64 57335.78 00:30:29.966 00:30:29.966 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:30.225 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d757cb1f-24b9-4c8c-b896-9ce894ae021b 00:30:30.225 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 47d6207f-5e68-4526-9712-46804921151a 00:30:30.485 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:30.485 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:30.485 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:30:30.485 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:30.485 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:30.485 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:30.485 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:30.485 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:30.485 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:30.485 rmmod nvme_tcp 00:30:30.485 rmmod nvme_fabrics 00:30:30.485 rmmod nvme_keyring 00:30:30.485 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:30.485 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:30.485 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:30.485 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3934496 ']' 00:30:30.485 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3934496 00:30:30.485 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3934496 ']' 00:30:30.485 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3934496 00:30:30.744 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:30:30.744 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:30.744 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3934496 00:30:30.744 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:30.744 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:30.744 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3934496' 00:30:30.744 killing process with pid 3934496 00:30:30.744 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3934496 00:30:30.744 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3934496 00:30:30.744 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:30.744 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:30.744 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:30.744 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:30.744 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:31.003 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:31.003 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:31.003 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:31.003 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:31.003 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.003 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.003 19:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.908 19:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:32.908 00:30:32.908 real 0m21.781s 00:30:32.908 user 0m55.607s 00:30:32.908 sys 0m9.854s 00:30:32.908 19:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:32.908 19:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:32.908 ************************************ 00:30:32.908 END TEST nvmf_lvol 00:30:32.908 ************************************ 00:30:32.908 19:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:32.908 19:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:32.908 19:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:32.908 19:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:32.908 ************************************ 00:30:32.908 START TEST nvmf_lvs_grow 00:30:32.908 
************************************ 00:30:32.908 19:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:33.168 * Looking for test storage... 00:30:33.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:33.168 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:33.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.169 --rc genhtml_branch_coverage=1 00:30:33.169 --rc genhtml_function_coverage=1 00:30:33.169 --rc genhtml_legend=1 00:30:33.169 --rc geninfo_all_blocks=1 00:30:33.169 --rc geninfo_unexecuted_blocks=1 00:30:33.169 00:30:33.169 ' 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:33.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.169 --rc genhtml_branch_coverage=1 00:30:33.169 --rc genhtml_function_coverage=1 00:30:33.169 --rc genhtml_legend=1 00:30:33.169 --rc geninfo_all_blocks=1 00:30:33.169 --rc geninfo_unexecuted_blocks=1 00:30:33.169 00:30:33.169 ' 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:33.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.169 --rc genhtml_branch_coverage=1 00:30:33.169 --rc genhtml_function_coverage=1 00:30:33.169 --rc genhtml_legend=1 00:30:33.169 --rc geninfo_all_blocks=1 00:30:33.169 --rc geninfo_unexecuted_blocks=1 00:30:33.169 00:30:33.169 ' 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:33.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.169 --rc genhtml_branch_coverage=1 00:30:33.169 --rc genhtml_function_coverage=1 00:30:33.169 --rc genhtml_legend=1 00:30:33.169 --rc geninfo_all_blocks=1 00:30:33.169 --rc geninfo_unexecuted_blocks=1 00:30:33.169 00:30:33.169 ' 00:30:33.169 19:31:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:33.169 19:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:39.860 19:32:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:39.860 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:39.860 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:39.860 Found net devices under 0000:86:00.0: cvl_0_0 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:39.860 Found net devices under 0000:86:00.1: cvl_0_1 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:39.860 19:32:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:39.860 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:39.860 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:39.860 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:39.860 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:39.860 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:39.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:39.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:30:39.860 00:30:39.860 --- 10.0.0.2 ping statistics --- 00:30:39.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.860 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:30:39.860 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:39.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:39.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:30:39.860 00:30:39.860 --- 10.0.0.1 ping statistics --- 00:30:39.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.860 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:30:39.860 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:39.860 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:39.860 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:39.860 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3940150 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3940150 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3940150 ']' 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:39.861 [2024-11-26 19:32:02.169448] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:30:39.861 [2024-11-26 19:32:02.170341] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:30:39.861 [2024-11-26 19:32:02.170374] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.861 [2024-11-26 19:32:02.249191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.861 [2024-11-26 19:32:02.289674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.861 [2024-11-26 19:32:02.289710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.861 [2024-11-26 19:32:02.289717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.861 [2024-11-26 19:32:02.289723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.861 [2024-11-26 19:32:02.289727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:39.861 [2024-11-26 19:32:02.290258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.861 [2024-11-26 19:32:02.357302] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:39.861 [2024-11-26 19:32:02.357502] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:39.861 [2024-11-26 19:32:02.586920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:39.861 ************************************ 00:30:39.861 START TEST lvs_grow_clean 00:30:39.861 ************************************ 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:39.861 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:40.123 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=178249d1-0516-47d2-94fd-4f5bb90d7f2a 00:30:40.123 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 178249d1-0516-47d2-94fd-4f5bb90d7f2a 00:30:40.123 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:40.434 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:40.434 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:40.434 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 178249d1-0516-47d2-94fd-4f5bb90d7f2a lvol 150 00:30:40.434 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=daed38a7-f1fc-4136-9e13-b5b501121456 00:30:40.434 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:40.434 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:40.693 [2024-11-26 19:32:03.638758] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:40.693 [2024-11-26 19:32:03.638893] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:40.693 true 00:30:40.693 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 178249d1-0516-47d2-94fd-4f5bb90d7f2a 00:30:40.693 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:40.953 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:40.953 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:40.953 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 daed38a7-f1fc-4136-9e13-b5b501121456 00:30:41.212 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:41.470 [2024-11-26 19:32:04.403145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.470 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:41.730 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3940542 00:30:41.730 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:41.730 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3940542 /var/tmp/bdevperf.sock 00:30:41.730 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:41.730 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3940542 ']' 00:30:41.730 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:41.730 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:41.730 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:41.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:41.730 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:41.730 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:41.730 [2024-11-26 19:32:04.646031] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:30:41.730 [2024-11-26 19:32:04.646083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3940542 ] 00:30:41.730 [2024-11-26 19:32:04.719647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.730 [2024-11-26 19:32:04.762650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.990 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:41.990 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:41.990 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:42.249 Nvme0n1 00:30:42.249 19:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:42.507 [ 00:30:42.507 { 00:30:42.507 "name": "Nvme0n1", 00:30:42.507 "aliases": [ 00:30:42.507 "daed38a7-f1fc-4136-9e13-b5b501121456" 00:30:42.507 ], 00:30:42.507 "product_name": "NVMe disk", 00:30:42.507 "block_size": 4096, 00:30:42.507 "num_blocks": 38912, 00:30:42.507 "uuid": "daed38a7-f1fc-4136-9e13-b5b501121456", 00:30:42.507 "numa_id": 1, 00:30:42.507 "assigned_rate_limits": { 00:30:42.507 "rw_ios_per_sec": 0, 00:30:42.507 "rw_mbytes_per_sec": 0, 00:30:42.507 "r_mbytes_per_sec": 0, 00:30:42.507 "w_mbytes_per_sec": 0 00:30:42.507 }, 00:30:42.507 "claimed": false, 00:30:42.507 "zoned": false, 00:30:42.507 "supported_io_types": { 00:30:42.507 "read": true, 00:30:42.507 "write": true, 00:30:42.507 "unmap": true, 00:30:42.507 "flush": true, 00:30:42.507 "reset": true, 00:30:42.507 "nvme_admin": true, 00:30:42.507 "nvme_io": true, 00:30:42.507 "nvme_io_md": false, 00:30:42.507 "write_zeroes": true, 00:30:42.507 "zcopy": false, 00:30:42.507 "get_zone_info": false, 00:30:42.507 "zone_management": false, 00:30:42.507 "zone_append": false, 00:30:42.507 "compare": true, 00:30:42.507 "compare_and_write": true, 00:30:42.507 "abort": true, 00:30:42.507 "seek_hole": false, 00:30:42.507 "seek_data": false, 00:30:42.507 "copy": true, 
00:30:42.507 "nvme_iov_md": false 00:30:42.507 }, 00:30:42.507 "memory_domains": [ 00:30:42.507 { 00:30:42.507 "dma_device_id": "system", 00:30:42.507 "dma_device_type": 1 00:30:42.507 } 00:30:42.507 ], 00:30:42.507 "driver_specific": { 00:30:42.507 "nvme": [ 00:30:42.507 { 00:30:42.507 "trid": { 00:30:42.507 "trtype": "TCP", 00:30:42.507 "adrfam": "IPv4", 00:30:42.507 "traddr": "10.0.0.2", 00:30:42.507 "trsvcid": "4420", 00:30:42.507 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:42.507 }, 00:30:42.507 "ctrlr_data": { 00:30:42.507 "cntlid": 1, 00:30:42.507 "vendor_id": "0x8086", 00:30:42.507 "model_number": "SPDK bdev Controller", 00:30:42.507 "serial_number": "SPDK0", 00:30:42.507 "firmware_revision": "25.01", 00:30:42.507 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:42.507 "oacs": { 00:30:42.507 "security": 0, 00:30:42.507 "format": 0, 00:30:42.507 "firmware": 0, 00:30:42.507 "ns_manage": 0 00:30:42.507 }, 00:30:42.507 "multi_ctrlr": true, 00:30:42.507 "ana_reporting": false 00:30:42.507 }, 00:30:42.508 "vs": { 00:30:42.508 "nvme_version": "1.3" 00:30:42.508 }, 00:30:42.508 "ns_data": { 00:30:42.508 "id": 1, 00:30:42.508 "can_share": true 00:30:42.508 } 00:30:42.508 } 00:30:42.508 ], 00:30:42.508 "mp_policy": "active_passive" 00:30:42.508 } 00:30:42.508 } 00:30:42.508 ] 00:30:42.508 19:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3940667 00:30:42.508 19:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:42.508 19:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:42.508 Running I/O for 10 seconds... 
00:30:43.443 Latency(us) 00:30:43.443 [2024-11-26T18:32:06.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:43.443 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:30:43.443 [2024-11-26T18:32:06.557Z] =================================================================================================================== 00:30:43.443 [2024-11-26T18:32:06.557Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:30:43.443 00:30:44.380 19:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 178249d1-0516-47d2-94fd-4f5bb90d7f2a 00:30:44.638 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:44.638 Nvme0n1 : 2.00 23050.50 90.04 0.00 0.00 0.00 0.00 0.00 00:30:44.638 [2024-11-26T18:32:07.752Z] =================================================================================================================== 00:30:44.638 [2024-11-26T18:32:07.752Z] Total : 23050.50 90.04 0.00 0.00 0.00 0.00 0.00 00:30:44.638 00:30:44.638 true 00:30:44.638 19:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:44.638 19:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 178249d1-0516-47d2-94fd-4f5bb90d7f2a 00:30:44.896 19:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:44.896 19:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:44.896 19:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3940667 00:30:45.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:45.464 Nvme0n1 : 3.00 23167.67 90.50 0.00 0.00 0.00 0.00 0.00 00:30:45.464 [2024-11-26T18:32:08.578Z] =================================================================================================================== 00:30:45.464 [2024-11-26T18:32:08.578Z] Total : 23167.67 90.50 0.00 0.00 0.00 0.00 0.00 00:30:45.464 00:30:46.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:46.841 Nvme0n1 : 4.00 23281.25 90.94 0.00 0.00 0.00 0.00 0.00 00:30:46.841 [2024-11-26T18:32:09.955Z] =================================================================================================================== 00:30:46.841 [2024-11-26T18:32:09.955Z] Total : 23281.25 90.94 0.00 0.00 0.00 0.00 0.00 00:30:46.841 00:30:47.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:47.777 Nvme0n1 : 5.00 23273.20 90.91 0.00 0.00 0.00 0.00 0.00 00:30:47.777 [2024-11-26T18:32:10.891Z] =================================================================================================================== 00:30:47.777 [2024-11-26T18:32:10.891Z] Total : 23273.20 90.91 0.00 0.00 0.00 0.00 0.00 00:30:47.777 00:30:48.715 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:48.715 Nvme0n1 : 6.00 23331.33 91.14 0.00 0.00 0.00 0.00 0.00 00:30:48.715 [2024-11-26T18:32:11.829Z] 
=================================================================================================================== 00:30:48.715 [2024-11-26T18:32:11.829Z] Total : 23331.33 91.14 0.00 0.00 0.00 0.00 0.00 00:30:48.715 00:30:49.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:49.652 Nvme0n1 : 7.00 23372.86 91.30 0.00 0.00 0.00 0.00 0.00 00:30:49.652 [2024-11-26T18:32:12.767Z] =================================================================================================================== 00:30:49.653 [2024-11-26T18:32:12.767Z] Total : 23372.86 91.30 0.00 0.00 0.00 0.00 0.00 00:30:49.653 00:30:50.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:50.589 Nvme0n1 : 8.00 23404.00 91.42 0.00 0.00 0.00 0.00 0.00 00:30:50.589 [2024-11-26T18:32:13.703Z] =================================================================================================================== 00:30:50.589 [2024-11-26T18:32:13.703Z] Total : 23404.00 91.42 0.00 0.00 0.00 0.00 0.00 00:30:50.589 00:30:51.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:51.527 Nvme0n1 : 9.00 23442.33 91.57 0.00 0.00 0.00 0.00 0.00 00:30:51.527 [2024-11-26T18:32:14.641Z] =================================================================================================================== 00:30:51.527 [2024-11-26T18:32:14.641Z] Total : 23442.33 91.57 0.00 0.00 0.00 0.00 0.00 00:30:51.527 00:30:52.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:52.464 Nvme0n1 : 10.00 23460.30 91.64 0.00 0.00 0.00 0.00 0.00 00:30:52.464 [2024-11-26T18:32:15.578Z] =================================================================================================================== 00:30:52.464 [2024-11-26T18:32:15.578Z] Total : 23460.30 91.64 0.00 0.00 0.00 0.00 0.00 00:30:52.464 00:30:52.464 00:30:52.464 Latency(us) 00:30:52.464 [2024-11-26T18:32:15.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:52.464 Nvme0n1 : 10.00 23467.09 91.67 0.00 0.00 5451.59 3245.59 27712.37 00:30:52.464 [2024-11-26T18:32:15.578Z] =================================================================================================================== 00:30:52.464 [2024-11-26T18:32:15.578Z] Total : 23467.09 91.67 0.00 0.00 5451.59 3245.59 27712.37 00:30:52.464 { 00:30:52.464 "results": [ 00:30:52.464 { 00:30:52.464 "job": "Nvme0n1", 00:30:52.464 "core_mask": "0x2", 00:30:52.464 "workload": "randwrite", 00:30:52.464 "status": "finished", 00:30:52.464 "queue_depth": 128, 00:30:52.464 "io_size": 4096, 00:30:52.464 "runtime": 10.002559, 00:30:52.464 "iops": 23467.094770448242, 00:30:52.464 "mibps": 91.66833894706345, 00:30:52.464 "io_failed": 0, 00:30:52.464 "io_timeout": 0, 00:30:52.464 "avg_latency_us": 5451.5886219666645, 00:30:52.464 "min_latency_us": 3245.592380952381, 00:30:52.464 "max_latency_us": 27712.365714285716 00:30:52.464 } 00:30:52.464 ], 00:30:52.465 "core_count": 1 00:30:52.465 } 00:30:52.723 19:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3940542 00:30:52.723 19:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3940542 ']' 00:30:52.723 19:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3940542 
00:30:52.723 19:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:52.723 19:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:52.723 19:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3940542 00:30:52.723 19:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:52.723 19:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:52.723 19:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3940542' 00:30:52.723 killing process with pid 3940542 00:30:52.723 19:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3940542 00:30:52.723 Received shutdown signal, test time was about 10.000000 seconds 00:30:52.723 00:30:52.723 Latency(us) 00:30:52.723 [2024-11-26T18:32:15.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.723 [2024-11-26T18:32:15.837Z] =================================================================================================================== 00:30:52.723 [2024-11-26T18:32:15.837Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:52.723 19:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3940542 00:30:52.723 19:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:52.983 19:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:53.242 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 178249d1-0516-47d2-94fd-4f5bb90d7f2a 00:30:53.242 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:53.501 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:53.501 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:53.501 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:53.501 [2024-11-26 19:32:16.534732] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:53.501 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 178249d1-0516-47d2-94fd-4f5bb90d7f2a 
00:30:53.501 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:53.501 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 178249d1-0516-47d2-94fd-4f5bb90d7f2a 00:30:53.501 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:53.501 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.501 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:53.501 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.501 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:53.501 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.501 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:53.501 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:53.501 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 178249d1-0516-47d2-94fd-4f5bb90d7f2a 00:30:53.760 request: 00:30:53.760 { 00:30:53.760 "uuid": "178249d1-0516-47d2-94fd-4f5bb90d7f2a", 00:30:53.760 "method": "bdev_lvol_get_lvstores", 00:30:53.760 "req_id": 1 00:30:53.760 } 00:30:53.760 Got JSON-RPC error response 00:30:53.760 response: 00:30:53.760 { 00:30:53.760 "code": -19, 00:30:53.760 "message": "No such device" 00:30:53.760 } 00:30:53.760 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:53.760 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:53.760 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:53.760 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:53.760 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:54.019 aio_bdev 00:30:54.019 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
daed38a7-f1fc-4136-9e13-b5b501121456 00:30:54.019 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=daed38a7-f1fc-4136-9e13-b5b501121456 00:30:54.019 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:54.019 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:54.019 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:54.019 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:54.019 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:54.278 19:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b daed38a7-f1fc-4136-9e13-b5b501121456 -t 2000 00:30:54.278 [ 00:30:54.278 { 00:30:54.278 "name": "daed38a7-f1fc-4136-9e13-b5b501121456", 00:30:54.278 "aliases": [ 00:30:54.278 "lvs/lvol" 00:30:54.278 ], 00:30:54.278 "product_name": "Logical Volume", 00:30:54.278 "block_size": 4096, 00:30:54.278 "num_blocks": 38912, 00:30:54.278 "uuid": "daed38a7-f1fc-4136-9e13-b5b501121456", 00:30:54.278 "assigned_rate_limits": { 00:30:54.278 "rw_ios_per_sec": 0, 00:30:54.278 "rw_mbytes_per_sec": 0, 00:30:54.278 "r_mbytes_per_sec": 0, 00:30:54.278 "w_mbytes_per_sec": 0 00:30:54.278 }, 00:30:54.278 "claimed": false, 00:30:54.278 "zoned": false, 00:30:54.278 "supported_io_types": { 00:30:54.278 "read": true, 00:30:54.278 "write": true, 00:30:54.278 "unmap": true, 00:30:54.278 "flush": false, 00:30:54.278 "reset": true, 00:30:54.278 "nvme_admin": false, 00:30:54.278 "nvme_io": false, 00:30:54.278 "nvme_io_md": false, 00:30:54.278 "write_zeroes": true, 00:30:54.278 "zcopy": false, 00:30:54.278 "get_zone_info": false, 00:30:54.278 "zone_management": false, 00:30:54.278 "zone_append": false, 00:30:54.278 "compare": false, 00:30:54.278 "compare_and_write": false, 00:30:54.278 "abort": false, 00:30:54.278 "seek_hole": true, 00:30:54.278 "seek_data": true, 00:30:54.278 "copy": false, 00:30:54.279 "nvme_iov_md": false 00:30:54.279 }, 00:30:54.279 "driver_specific": { 00:30:54.279 "lvol": { 00:30:54.279 "lvol_store_uuid": "178249d1-0516-47d2-94fd-4f5bb90d7f2a", 00:30:54.279 "base_bdev": "aio_bdev", 00:30:54.279 "thin_provision": false, 00:30:54.279 "num_allocated_clusters": 38, 00:30:54.279 "snapshot": false, 00:30:54.279 "clone": false, 00:30:54.279 "esnap_clone": false 00:30:54.279 } 00:30:54.279 } 00:30:54.279 } 00:30:54.279 ] 00:30:54.279 19:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:54.279 19:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 178249d1-0516-47d2-94fd-4f5bb90d7f2a 00:30:54.279 19:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:54.536 19:32:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:54.536 19:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 178249d1-0516-47d2-94fd-4f5bb90d7f2a 00:30:54.536 19:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:54.794 19:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:54.794 19:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete daed38a7-f1fc-4136-9e13-b5b501121456 00:30:54.794 19:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 178249d1-0516-47d2-94fd-4f5bb90d7f2a 00:30:55.053 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:55.313 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:55.313 00:30:55.313 real 0m15.686s 00:30:55.313 user 0m15.165s 00:30:55.313 sys 0m1.518s 00:30:55.313 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:55.313 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:55.313 ************************************ 00:30:55.313 END TEST lvs_grow_clean 00:30:55.313 ************************************ 00:30:55.313 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:55.313 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:55.313 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:55.313 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:55.313 ************************************ 00:30:55.313 START TEST lvs_grow_dirty 00:30:55.313 ************************************ 00:30:55.313 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:55.313 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:55.313 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:55.313 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:55.313 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:55.313 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:55.313 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:55.313 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:55.313 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:55.313 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:55.572 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:55.572 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:55.830 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8abe9464-0da9-46f6-a4c4-ae446a151079 00:30:55.830 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8abe9464-0da9-46f6-a4c4-ae446a151079 00:30:55.830 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:56.088 19:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:56.088 19:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:56.088 19:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8abe9464-0da9-46f6-a4c4-ae446a151079 lvol 150 00:30:56.346 19:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ca90c045-f09b-47f2-9df9-ee5f4c982aa4 00:30:56.346 19:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:56.346 19:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:56.346 [2024-11-26 19:32:19.390646] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:56.346 [2024-11-26 19:32:19.390801] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:56.346 true 00:30:56.346 19:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8abe9464-0da9-46f6-a4c4-ae446a151079 00:30:56.346 19:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:56.604 19:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:56.604 19:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:56.863 19:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ca90c045-f09b-47f2-9df9-ee5f4c982aa4 00:30:56.863 19:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:57.121 [2024-11-26 19:32:20.135206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:57.121 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:57.379 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3943068 00:30:57.379 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:57.379 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:57.379 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3943068 /var/tmp/bdevperf.sock 00:30:57.379 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3943068 ']' 00:30:57.379 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:57.379 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:57.379 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:57.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
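Condensing the dirty-case setup traced above into its essential RPC sequence (rpc.py path shortened, backing-file path illustrative; the grow call itself is issued further down, while bdevperf is running):

  truncate -s 200M aio_file                                    # 200 MiB backing file
  rpc.py bdev_aio_create aio_file aio_bdev 4096                # AIO bdev, 4 KiB blocks
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  rpc.py bdev_lvol_create -u "$lvs" lvol 150                   # 150 MiB lvol (38 clusters / 38912 blocks above)
  truncate -s 400M aio_file                                    # grow the backing file
  rpc.py bdev_aio_rescan aio_bdev                              # block count 51200 -> 102400, per the NOTICE above
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"                      # lvstore picks up the new capacity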
00:30:57.379 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:57.379 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:57.379 [2024-11-26 19:32:20.411101] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:30:57.379 [2024-11-26 19:32:20.411155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3943068 ] 00:30:57.379 [2024-11-26 19:32:20.486342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.638 [2024-11-26 19:32:20.528866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.638 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:57.638 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:57.638 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:58.203 Nvme0n1 00:30:58.203 19:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:58.203 [ 00:30:58.203 { 00:30:58.203 "name": "Nvme0n1", 00:30:58.203 "aliases": [ 00:30:58.203 "ca90c045-f09b-47f2-9df9-ee5f4c982aa4" 00:30:58.203 ], 00:30:58.203 "product_name": "NVMe disk", 00:30:58.203 "block_size": 4096, 00:30:58.203 "num_blocks": 38912, 00:30:58.203 "uuid": "ca90c045-f09b-47f2-9df9-ee5f4c982aa4", 00:30:58.203 "numa_id": 1, 00:30:58.203 "assigned_rate_limits": { 00:30:58.203 "rw_ios_per_sec": 0, 00:30:58.203 "rw_mbytes_per_sec": 0, 00:30:58.203 "r_mbytes_per_sec": 0, 00:30:58.203 "w_mbytes_per_sec": 0 00:30:58.203 }, 00:30:58.203 "claimed": false, 00:30:58.203 "zoned": false, 00:30:58.203 "supported_io_types": { 00:30:58.203 "read": true, 00:30:58.203 "write": true, 00:30:58.203 "unmap": true, 00:30:58.203 "flush": true, 00:30:58.203 "reset": true, 00:30:58.203 "nvme_admin": true, 00:30:58.203 "nvme_io": true, 00:30:58.203 "nvme_io_md": false, 00:30:58.203 "write_zeroes": true, 00:30:58.203 "zcopy": false, 00:30:58.203 "get_zone_info": false, 00:30:58.204 "zone_management": false, 00:30:58.204 "zone_append": false, 00:30:58.204 "compare": true, 00:30:58.204 "compare_and_write": true, 00:30:58.204 "abort": true, 00:30:58.204 "seek_hole": false, 00:30:58.204 "seek_data": false, 00:30:58.204 "copy": true, 00:30:58.204 "nvme_iov_md": false 00:30:58.204 }, 00:30:58.204 "memory_domains": [ 00:30:58.204 { 00:30:58.204 "dma_device_id": "system", 00:30:58.204 "dma_device_type": 1 00:30:58.204 } 00:30:58.204 ], 00:30:58.204 "driver_specific": { 00:30:58.204 "nvme": [ 00:30:58.204 { 00:30:58.204 "trid": { 00:30:58.204 "trtype": "TCP", 00:30:58.204 "adrfam": "IPv4", 00:30:58.204 "traddr": "10.0.0.2", 00:30:58.204 "trsvcid": "4420", 00:30:58.204 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:58.204 }, 00:30:58.204 "ctrlr_data": 
{ 00:30:58.204 "cntlid": 1, 00:30:58.204 "vendor_id": "0x8086", 00:30:58.204 "model_number": "SPDK bdev Controller", 00:30:58.204 "serial_number": "SPDK0", 00:30:58.204 "firmware_revision": "25.01", 00:30:58.204 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:58.204 "oacs": { 00:30:58.204 "security": 0, 00:30:58.204 "format": 0, 00:30:58.204 "firmware": 0, 00:30:58.204 "ns_manage": 0 00:30:58.204 }, 00:30:58.204 "multi_ctrlr": true, 00:30:58.204 "ana_reporting": false 00:30:58.204 }, 00:30:58.204 "vs": { 00:30:58.204 "nvme_version": "1.3" 00:30:58.204 }, 00:30:58.204 "ns_data": { 00:30:58.204 "id": 1, 00:30:58.204 "can_share": true 00:30:58.204 } 00:30:58.204 } 00:30:58.204 ], 00:30:58.204 "mp_policy": "active_passive" 00:30:58.204 } 00:30:58.204 } 00:30:58.204 ] 00:30:58.204 19:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3943247 00:30:58.204 19:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:58.204 19:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:58.204 Running I/O for 10 seconds... 00:30:59.581 Latency(us) 00:30:59.581 [2024-11-26T18:32:22.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:59.581 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:30:59.581 [2024-11-26T18:32:22.695Z] =================================================================================================================== 00:30:59.581 [2024-11-26T18:32:22.695Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:30:59.581 00:31:00.148 19:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8abe9464-0da9-46f6-a4c4-ae446a151079 00:31:00.407 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:00.407 Nvme0n1 : 2.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:31:00.407 [2024-11-26T18:32:23.521Z] =================================================================================================================== 00:31:00.407 [2024-11-26T18:32:23.521Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:31:00.407 00:31:00.407 true 00:31:00.407 19:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8abe9464-0da9-46f6-a4c4-ae446a151079 00:31:00.407 19:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:00.666 19:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:00.666 19:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:00.666 19:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3943247 00:31:01.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:01.233 Nvme0n1 : 
3.00 23198.67 90.62 0.00 0.00 0.00 0.00 0.00 00:31:01.233 [2024-11-26T18:32:24.347Z] =================================================================================================================== 00:31:01.233 [2024-11-26T18:32:24.347Z] Total : 23198.67 90.62 0.00 0.00 0.00 0.00 0.00 00:31:01.233 00:31:02.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:02.610 Nvme0n1 : 4.00 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:31:02.610 [2024-11-26T18:32:25.725Z] =================================================================================================================== 00:31:02.611 [2024-11-26T18:32:25.725Z] Total : 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:31:02.611 00:31:03.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:03.547 Nvme0n1 : 5.00 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:31:03.547 [2024-11-26T18:32:26.661Z] =================================================================================================================== 00:31:03.547 [2024-11-26T18:32:26.661Z] Total : 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:31:03.547 00:31:04.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:04.484 Nvme0n1 : 6.00 23410.33 91.45 0.00 0.00 0.00 0.00 0.00 00:31:04.484 [2024-11-26T18:32:27.598Z] =================================================================================================================== 00:31:04.484 [2024-11-26T18:32:27.598Z] Total : 23410.33 91.45 0.00 0.00 0.00 0.00 0.00 00:31:04.484 00:31:05.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:05.421 Nvme0n1 : 7.00 23440.57 91.56 0.00 0.00 0.00 0.00 0.00 00:31:05.421 [2024-11-26T18:32:28.535Z] =================================================================================================================== 00:31:05.421 [2024-11-26T18:32:28.535Z] Total : 23440.57 91.56 0.00 0.00 0.00 0.00 0.00 00:31:05.421 00:31:06.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:06.357 Nvme0n1 : 8.00 23479.12 91.72 0.00 0.00 0.00 0.00 0.00 00:31:06.357 [2024-11-26T18:32:29.471Z] =================================================================================================================== 00:31:06.357 [2024-11-26T18:32:29.471Z] Total : 23479.12 91.72 0.00 0.00 0.00 0.00 0.00 00:31:06.357 00:31:07.292 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:07.292 Nvme0n1 : 9.00 23473.89 91.69 0.00 0.00 0.00 0.00 0.00 00:31:07.292 [2024-11-26T18:32:30.406Z] =================================================================================================================== 00:31:07.292 [2024-11-26T18:32:30.406Z] Total : 23473.89 91.69 0.00 0.00 0.00 0.00 0.00 00:31:07.292 00:31:08.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:08.228 Nvme0n1 : 10.00 23487.20 91.75 0.00 0.00 0.00 0.00 0.00 00:31:08.228 [2024-11-26T18:32:31.342Z] =================================================================================================================== 00:31:08.228 [2024-11-26T18:32:31.342Z] Total : 23487.20 91.75 0.00 0.00 0.00 0.00 0.00 00:31:08.228 00:31:08.228 00:31:08.228 Latency(us) 00:31:08.228 [2024-11-26T18:32:31.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:08.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:08.229 Nvme0n1 : 10.01 23485.54 91.74 0.00 0.00 5447.23 3198.78 25715.08 00:31:08.229 
[2024-11-26T18:32:31.343Z] =================================================================================================================== 00:31:08.229 [2024-11-26T18:32:31.343Z] Total : 23485.54 91.74 0.00 0.00 5447.23 3198.78 25715.08 00:31:08.229 { 00:31:08.229 "results": [ 00:31:08.229 { 00:31:08.229 "job": "Nvme0n1", 00:31:08.229 "core_mask": "0x2", 00:31:08.229 "workload": "randwrite", 00:31:08.229 "status": "finished", 00:31:08.229 "queue_depth": 128, 00:31:08.229 "io_size": 4096, 00:31:08.229 "runtime": 10.006156, 00:31:08.229 "iops": 23485.54230016002, 00:31:08.229 "mibps": 91.74039961000008, 00:31:08.229 "io_failed": 0, 00:31:08.229 "io_timeout": 0, 00:31:08.229 "avg_latency_us": 5447.231473475178, 00:31:08.229 "min_latency_us": 3198.7809523809524, 00:31:08.229 "max_latency_us": 25715.078095238096 00:31:08.229 } 00:31:08.229 ], 00:31:08.229 "core_count": 1 00:31:08.229 } 00:31:08.489 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3943068 00:31:08.489 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3943068 ']' 00:31:08.489 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3943068 00:31:08.489 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:31:08.489 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:08.489 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3943068 00:31:08.489 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:08.489 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:08.489 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3943068' 00:31:08.489 killing process with pid 3943068 00:31:08.489 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3943068 00:31:08.489 Received shutdown signal, test time was about 10.000000 seconds 00:31:08.489 00:31:08.489 Latency(us) 00:31:08.489 [2024-11-26T18:32:31.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:08.489 [2024-11-26T18:32:31.603Z] =================================================================================================================== 00:31:08.489 [2024-11-26T18:32:31.603Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:08.489 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3943068 00:31:08.489 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:08.748 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:31:09.007 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8abe9464-0da9-46f6-a4c4-ae446a151079 00:31:09.007 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:09.266 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:09.266 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:09.266 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3940150 00:31:09.266 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3940150 00:31:09.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3940150 Killed "${NVMF_APP[@]}" "$@" 00:31:09.266 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:09.266 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:09.266 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:09.266 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:09.266 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:09.266 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3945081 00:31:09.266 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3945081 00:31:09.266 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:09.266 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3945081 ']' 00:31:09.266 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.266 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:09.266 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:09.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
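The target restart traced above boils down to launching nvmf_tgt with interrupt mode enabled and then waiting for its RPC socket before issuing further calls; a minimal sketch (network-namespace wrapper and the framework's waitforlisten helper omitted):

  # mirror of the nvmfappstart call above: single core, tracepoint mask 0xFFFF, interrupt mode
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!
  # crude stand-in for waitforlisten: poll the default RPC socket until it answers
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done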
00:31:09.266 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:09.266 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:09.266 [2024-11-26 19:32:32.306259] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:09.266 [2024-11-26 19:32:32.307171] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:31:09.266 [2024-11-26 19:32:32.307212] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:09.526 [2024-11-26 19:32:32.386920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.526 [2024-11-26 19:32:32.426930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:09.526 [2024-11-26 19:32:32.426966] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:09.526 [2024-11-26 19:32:32.426973] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:09.526 [2024-11-26 19:32:32.426979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:09.526 [2024-11-26 19:32:32.426984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:09.526 [2024-11-26 19:32:32.427523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.526 [2024-11-26 19:32:32.496212] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:09.526 [2024-11-26 19:32:32.496422] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:10.094 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:10.094 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:10.094 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:10.094 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:10.094 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:10.094 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:10.094 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:10.352 [2024-11-26 19:32:33.344969] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:10.352 [2024-11-26 19:32:33.345168] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:10.352 [2024-11-26 19:32:33.345251] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:10.352 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:10.352 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ca90c045-f09b-47f2-9df9-ee5f4c982aa4 00:31:10.352 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ca90c045-f09b-47f2-9df9-ee5f4c982aa4 00:31:10.352 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:10.352 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:10.352 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:10.352 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:10.352 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:10.610 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ca90c045-f09b-47f2-9df9-ee5f4c982aa4 -t 2000 00:31:10.869 [ 00:31:10.869 { 00:31:10.869 "name": "ca90c045-f09b-47f2-9df9-ee5f4c982aa4", 00:31:10.869 "aliases": [ 00:31:10.869 "lvs/lvol" 00:31:10.869 ], 00:31:10.869 "product_name": "Logical Volume", 00:31:10.869 "block_size": 4096, 00:31:10.869 "num_blocks": 38912, 00:31:10.869 "uuid": "ca90c045-f09b-47f2-9df9-ee5f4c982aa4", 00:31:10.869 "assigned_rate_limits": { 00:31:10.869 "rw_ios_per_sec": 0, 00:31:10.869 "rw_mbytes_per_sec": 0, 00:31:10.869 
"r_mbytes_per_sec": 0, 00:31:10.869 "w_mbytes_per_sec": 0 00:31:10.869 }, 00:31:10.869 "claimed": false, 00:31:10.869 "zoned": false, 00:31:10.869 "supported_io_types": { 00:31:10.869 "read": true, 00:31:10.869 "write": true, 00:31:10.869 "unmap": true, 00:31:10.869 "flush": false, 00:31:10.869 "reset": true, 00:31:10.869 "nvme_admin": false, 00:31:10.869 "nvme_io": false, 00:31:10.869 "nvme_io_md": false, 00:31:10.869 "write_zeroes": true, 00:31:10.869 "zcopy": false, 00:31:10.869 "get_zone_info": false, 00:31:10.869 "zone_management": false, 00:31:10.869 "zone_append": false, 00:31:10.869 "compare": false, 00:31:10.869 "compare_and_write": false, 00:31:10.869 "abort": false, 00:31:10.869 "seek_hole": true, 00:31:10.869 "seek_data": true, 00:31:10.869 "copy": false, 00:31:10.869 "nvme_iov_md": false 00:31:10.869 }, 00:31:10.869 "driver_specific": { 00:31:10.869 "lvol": { 00:31:10.869 "lvol_store_uuid": "8abe9464-0da9-46f6-a4c4-ae446a151079", 00:31:10.869 "base_bdev": "aio_bdev", 00:31:10.869 "thin_provision": false, 00:31:10.869 "num_allocated_clusters": 38, 00:31:10.869 "snapshot": false, 00:31:10.869 "clone": false, 00:31:10.869 "esnap_clone": false 00:31:10.869 } 00:31:10.869 } 00:31:10.869 } 00:31:10.869 ] 00:31:10.869 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:10.869 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8abe9464-0da9-46f6-a4c4-ae446a151079 00:31:10.869 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:10.869 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:10.869 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8abe9464-0da9-46f6-a4c4-ae446a151079 00:31:10.869 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:11.128 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:11.128 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:11.387 [2024-11-26 19:32:34.283980] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:11.387 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8abe9464-0da9-46f6-a4c4-ae446a151079 00:31:11.387 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:31:11.387 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8abe9464-0da9-46f6-a4c4-ae446a151079 00:31:11.387 19:32:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:11.387 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:11.387 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:11.387 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:11.387 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:11.387 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:11.387 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:11.387 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:11.387 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8abe9464-0da9-46f6-a4c4-ae446a151079 00:31:11.647 request: 00:31:11.647 { 00:31:11.647 "uuid": "8abe9464-0da9-46f6-a4c4-ae446a151079", 00:31:11.647 "method": "bdev_lvol_get_lvstores", 00:31:11.647 "req_id": 1 00:31:11.647 } 00:31:11.647 Got JSON-RPC error response 00:31:11.647 response: 00:31:11.647 { 00:31:11.647 "code": -19, 00:31:11.647 "message": "No such device" 00:31:11.647 } 00:31:11.647 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:31:11.647 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:11.647 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:11.647 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:11.647 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:11.647 aio_bdev 00:31:11.647 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ca90c045-f09b-47f2-9df9-ee5f4c982aa4 00:31:11.647 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ca90c045-f09b-47f2-9df9-ee5f4c982aa4 00:31:11.647 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:11.647 19:32:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:11.647 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:11.647 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:11.647 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:11.905 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ca90c045-f09b-47f2-9df9-ee5f4c982aa4 -t 2000 00:31:12.164 [ 00:31:12.164 { 00:31:12.164 "name": "ca90c045-f09b-47f2-9df9-ee5f4c982aa4", 00:31:12.164 "aliases": [ 00:31:12.164 "lvs/lvol" 00:31:12.164 ], 00:31:12.164 "product_name": "Logical Volume", 00:31:12.164 "block_size": 4096, 00:31:12.164 "num_blocks": 38912, 00:31:12.164 "uuid": "ca90c045-f09b-47f2-9df9-ee5f4c982aa4", 00:31:12.164 "assigned_rate_limits": { 00:31:12.164 "rw_ios_per_sec": 0, 00:31:12.164 "rw_mbytes_per_sec": 0, 00:31:12.164 "r_mbytes_per_sec": 0, 00:31:12.164 "w_mbytes_per_sec": 0 00:31:12.164 }, 00:31:12.164 "claimed": false, 00:31:12.164 "zoned": false, 00:31:12.164 "supported_io_types": { 00:31:12.164 "read": true, 00:31:12.164 "write": true, 00:31:12.164 "unmap": true, 00:31:12.164 "flush": false, 00:31:12.164 "reset": true, 00:31:12.164 "nvme_admin": false, 00:31:12.164 "nvme_io": false, 00:31:12.164 "nvme_io_md": false, 00:31:12.164 "write_zeroes": true, 00:31:12.164 "zcopy": false, 00:31:12.164 "get_zone_info": false, 00:31:12.164 "zone_management": false, 00:31:12.164 "zone_append": false, 00:31:12.164 "compare": false, 00:31:12.164 "compare_and_write": false, 00:31:12.164 "abort": false, 00:31:12.164 "seek_hole": true, 00:31:12.164 "seek_data": true, 00:31:12.164 "copy": false, 00:31:12.164 "nvme_iov_md": false 00:31:12.164 }, 00:31:12.164 "driver_specific": { 00:31:12.164 "lvol": { 00:31:12.164 "lvol_store_uuid": "8abe9464-0da9-46f6-a4c4-ae446a151079", 00:31:12.164 "base_bdev": "aio_bdev", 00:31:12.164 "thin_provision": false, 00:31:12.164 "num_allocated_clusters": 38, 00:31:12.164 "snapshot": false, 00:31:12.164 "clone": false, 00:31:12.164 "esnap_clone": false 00:31:12.164 } 00:31:12.164 } 00:31:12.164 } 00:31:12.164 ] 00:31:12.164 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:12.164 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8abe9464-0da9-46f6-a4c4-ae446a151079 00:31:12.164 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:12.424 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:12.424 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8abe9464-0da9-46f6-a4c4-ae446a151079 00:31:12.424 19:32:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:12.424 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:12.424 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ca90c045-f09b-47f2-9df9-ee5f4c982aa4 00:31:12.683 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8abe9464-0da9-46f6-a4c4-ae446a151079 00:31:12.940 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:13.199 00:31:13.199 real 0m17.738s 00:31:13.199 user 0m34.540s 00:31:13.199 sys 0m3.974s 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:13.199 ************************************ 00:31:13.199 END TEST lvs_grow_dirty 00:31:13.199 ************************************ 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:13.199 nvmf_trace.0 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
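A distinguishing piece of the dirty case just completed is the recovery path: after the target restart, only the AIO bdev is recreated and the blobstore replays its own metadata, after which the original lvol and lvstore reappear. Condensed from the RPCs traced above (rpc.py path shortened, UUIDs are the ones from this run):

  rpc.py bdev_aio_create aio_file aio_bdev 4096                          # triggers 'Performing recovery on blobstore'
  rpc.py bdev_wait_for_examine
  rpc.py bdev_get_bdevs -b ca90c045-f09b-47f2-9df9-ee5f4c982aa4 -t 2000  # lvol is back
  rpc.py bdev_lvol_get_lvstores -u 8abe9464-0da9-46f6-a4c4-ae446a151079  # free=61, total=99 clusters, as checked above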
00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:13.199 rmmod nvme_tcp 00:31:13.199 rmmod nvme_fabrics 00:31:13.199 rmmod nvme_keyring 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3945081 ']' 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3945081 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3945081 ']' 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3945081 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:13.199 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3945081 00:31:13.462 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:13.462 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:13.462 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3945081' 00:31:13.462 killing process with pid 3945081 00:31:13.462 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3945081 00:31:13.462 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3945081 00:31:13.462 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:13.462 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:13.462 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:13.462 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:13.462 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:31:13.462 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:13.462 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:31:13.462 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:13.462 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:13.462 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.462 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.462 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:15.997 00:31:15.997 real 0m42.607s 00:31:15.997 user 0m52.214s 00:31:15.997 sys 0m10.395s 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:15.997 ************************************ 00:31:15.997 END TEST nvmf_lvs_grow 00:31:15.997 ************************************ 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:15.997 ************************************ 00:31:15.997 START TEST nvmf_bdev_io_wait 00:31:15.997 ************************************ 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:15.997 * Looking for test storage... 
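Between END TEST nvmf_lvs_grow and the start of the next suite, nvmftestfini tears the fixture down: the shared-memory trace file is archived, the host-side NVMe modules are unloaded, the nvmf_tgt process (pid 3945081 here) is killed, the SPDK-tagged iptables rules are stripped, the target's network namespace is removed, and the initiator-side address is flushed. A rough equivalent of that cleanup, assuming the namespace name used in this run (the wait loop is a stand-in for the harness's killprocess helper, and the netns delete approximates remove_spdk_ns):

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
nvmfpid=3945081                                              # target pid reported above

tar -C /dev/shm -czf "$SPDK/../output/nvmf_trace.0_shm.tar.gz" nvmf_trace.0   # keep the trace for later analysis
modprobe -v -r nvme-tcp nvme-fabrics                         # unload the host-side NVMe/TCP stack
kill "$nvmfpid"
while kill -0 "$nvmfpid" 2> /dev/null; do sleep 0.5; done    # wait for the reactor process to exit
iptables-save | grep -v SPDK_NVMF | iptables-restore         # drop only the rules the test added
ip netns delete cvl_0_0_ns_spdk                              # remove the target's namespace
ip -4 addr flush cvl_0_1                                     # clear the initiator-side address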
00:31:15.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.997 --rc genhtml_branch_coverage=1 00:31:15.997 --rc genhtml_function_coverage=1 00:31:15.997 --rc genhtml_legend=1 00:31:15.997 --rc geninfo_all_blocks=1 00:31:15.997 --rc geninfo_unexecuted_blocks=1 00:31:15.997 00:31:15.997 ' 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.997 --rc genhtml_branch_coverage=1 00:31:15.997 --rc genhtml_function_coverage=1 00:31:15.997 --rc genhtml_legend=1 00:31:15.997 --rc geninfo_all_blocks=1 00:31:15.997 --rc geninfo_unexecuted_blocks=1 00:31:15.997 00:31:15.997 ' 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.997 --rc genhtml_branch_coverage=1 00:31:15.997 --rc genhtml_function_coverage=1 00:31:15.997 --rc genhtml_legend=1 00:31:15.997 --rc geninfo_all_blocks=1 00:31:15.997 --rc geninfo_unexecuted_blocks=1 00:31:15.997 00:31:15.997 ' 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.997 --rc genhtml_branch_coverage=1 00:31:15.997 --rc genhtml_function_coverage=1 00:31:15.997 --rc genhtml_legend=1 00:31:15.997 --rc geninfo_all_blocks=1 00:31:15.997 --rc 
geninfo_unexecuted_blocks=1 00:31:15.997 00:31:15.997 ' 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.997 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:15.998 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
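gather_supported_nvmf_pci_devs above builds whitelists of Intel E810/X722 and Mellanox device IDs, filters the host's PCI functions against them, and then (as the entries that follow show) resolves each match to its kernel interface through the function's net/ directory in sysfs. The same lookup, reduced to a stand-alone snippet for the first E810 port found in this run:

#!/usr/bin/env bash
pci=0000:86:00.0
vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")   # 0x8086 (Intel)
device=$(cat "/sys/bus/pci/devices/$pci/device")   # 0x159b (the E810 ID matched in this job)
# Each entry under net/ is a kernel netdev bound to this PCI function.
for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
    echo "Found $pci ($vendor - $device): ${netdev##*/}"
done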
00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.577 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:22.577 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:22.578 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:22.578 Found net devices under 0000:86:00.0: cvl_0_0 00:31:22.578 
19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:22.578 Found net devices under 0000:86:00.1: cvl_0_1 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:22.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:22.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:31:22.578 00:31:22.578 --- 10.0.0.2 ping statistics --- 00:31:22.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.578 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:22.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:22.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:31:22.578 00:31:22.578 --- 10.0.0.1 ping statistics --- 00:31:22.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.578 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3949664 00:31:22.578 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3949664 00:31:22.579 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:31:22.579 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3949664 ']' 00:31:22.579 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.579 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:22.579 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
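The nvmf_tcp_init sequence above gives the target its own network namespace and a private 10.0.0.0/24 link: the first E810 port (cvl_0_0) is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as 10.0.0.1, TCP port 4420 is opened with an iptables rule tagged SPDK_NVMF so cleanup can find it again, and a ping in each direction confirms the path. Condensed into a stand-alone sketch with the same names (the iptables comment is abbreviated relative to the harness's):

#!/usr/bin/env bash
set -e
ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"                       # target-side port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                    # root namespace -> target namespace
ip netns exec "$ns" ping -c 1 10.0.0.1                # target namespace -> root namespace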
00:31:22.579 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:22.579 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.579 [2024-11-26 19:32:44.846248] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:22.579 [2024-11-26 19:32:44.847147] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:31:22.579 [2024-11-26 19:32:44.847181] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:22.579 [2024-11-26 19:32:44.924558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:22.579 [2024-11-26 19:32:44.967434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:22.579 [2024-11-26 19:32:44.967470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:22.579 [2024-11-26 19:32:44.967477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:22.579 [2024-11-26 19:32:44.967483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:22.579 [2024-11-26 19:32:44.967489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:22.579 [2024-11-26 19:32:44.969080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.579 [2024-11-26 19:32:44.969096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:22.579 [2024-11-26 19:32:44.969185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.579 [2024-11-26 19:32:44.969186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:22.579 [2024-11-26 19:32:44.969594] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
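nvmfappstart then launches nvmf_tgt inside that namespace with the full trace mask (-e 0xFFFF), --interrupt-mode, core mask 0xF and --wait-for-rpc, and blocks until the RPC socket at /var/tmp/spdk.sock answers; the EAL and reactor notices above are that process coming up on cores 0-3 and switching its threads to interrupt mode. A simplified start-and-wait pattern (the polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its exact implementation):

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!
# Poll the RPC socket until the target accepts calls, bailing out if it died early.
until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done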
00:31:22.579 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:22.579 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:31:22.579 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:22.579 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:22.579 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.579 [2024-11-26 19:32:45.090558] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:22.579 [2024-11-26 19:32:45.091650] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:22.579 [2024-11-26 19:32:45.091939] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:22.579 [2024-11-26 19:32:45.092028] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
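Because the target was started with --wait-for-rpc, its framework stays idle until the harness pushes pre-init options and explicitly starts it, which is what the two rpc_cmd calls above do. As direct rpc.py calls they would look like the following; the tiny pool and cache values are presumably what forces the bdev I/O-wait path this test exercises:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/scripts/rpc.py" bdev_set_options -p 5 -c 1    # bdev_io pool of 5 with a per-thread cache of 1
"$SPDK/scripts/rpc.py" framework_start_init          # leave the pre-init state and start the subsystems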
00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.579 [2024-11-26 19:32:45.102098] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.579 Malloc0 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.579 [2024-11-26 19:32:45.174224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3949802 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3949805 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:22.579 { 00:31:22.579 "params": { 00:31:22.579 "name": "Nvme$subsystem", 00:31:22.579 "trtype": "$TEST_TRANSPORT", 00:31:22.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:22.579 "adrfam": "ipv4", 00:31:22.579 "trsvcid": "$NVMF_PORT", 00:31:22.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:22.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:22.579 "hdgst": ${hdgst:-false}, 00:31:22.579 "ddgst": ${ddgst:-false} 00:31:22.579 }, 00:31:22.579 "method": "bdev_nvme_attach_controller" 00:31:22.579 } 00:31:22.579 EOF 00:31:22.579 )") 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3949807 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3949810 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:22.579 { 00:31:22.579 "params": { 00:31:22.579 "name": "Nvme$subsystem", 00:31:22.579 "trtype": "$TEST_TRANSPORT", 00:31:22.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:22.579 "adrfam": "ipv4", 00:31:22.579 "trsvcid": "$NVMF_PORT", 00:31:22.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:22.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:22.579 "hdgst": ${hdgst:-false}, 00:31:22.579 "ddgst": ${ddgst:-false} 00:31:22.579 }, 00:31:22.579 "method": "bdev_nvme_attach_controller" 00:31:22.579 } 00:31:22.579 EOF 00:31:22.579 )") 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:31:22.579 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:22.580 { 00:31:22.580 "params": { 00:31:22.580 "name": "Nvme$subsystem", 00:31:22.580 "trtype": "$TEST_TRANSPORT", 00:31:22.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:22.580 "adrfam": "ipv4", 00:31:22.580 "trsvcid": "$NVMF_PORT", 00:31:22.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:22.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:22.580 "hdgst": ${hdgst:-false}, 00:31:22.580 "ddgst": ${ddgst:-false} 00:31:22.580 }, 00:31:22.580 "method": "bdev_nvme_attach_controller" 00:31:22.580 } 00:31:22.580 EOF 00:31:22.580 )") 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:22.580 { 00:31:22.580 "params": { 00:31:22.580 "name": "Nvme$subsystem", 00:31:22.580 "trtype": "$TEST_TRANSPORT", 00:31:22.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:22.580 "adrfam": "ipv4", 00:31:22.580 "trsvcid": "$NVMF_PORT", 00:31:22.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:22.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:22.580 "hdgst": ${hdgst:-false}, 00:31:22.580 "ddgst": ${ddgst:-false} 00:31:22.580 }, 00:31:22.580 "method": "bdev_nvme_attach_controller" 00:31:22.580 } 00:31:22.580 EOF 00:31:22.580 )") 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3949802 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
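Before the I/O jobs run, the target side was provisioned entirely over RPC a few entries back: a TCP transport (with the -o -u 8192 options the harness passes), a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 exposing it, and a listener on 10.0.0.2:4420. The same provisioning as plain rpc.py calls against the target's socket:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
"$RPC" nvmf_create_transport -t tcp -o -u 8192                                     # transport flags as used by the harness
"$RPC" bdev_malloc_create 64 512 -b Malloc0                                        # 64 MiB RAM bdev, 512-byte blocks
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as a namespace
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420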
00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:22.580 "params": { 00:31:22.580 "name": "Nvme1", 00:31:22.580 "trtype": "tcp", 00:31:22.580 "traddr": "10.0.0.2", 00:31:22.580 "adrfam": "ipv4", 00:31:22.580 "trsvcid": "4420", 00:31:22.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:22.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:22.580 "hdgst": false, 00:31:22.580 "ddgst": false 00:31:22.580 }, 00:31:22.580 "method": "bdev_nvme_attach_controller" 00:31:22.580 }' 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:22.580 "params": { 00:31:22.580 "name": "Nvme1", 00:31:22.580 "trtype": "tcp", 00:31:22.580 "traddr": "10.0.0.2", 00:31:22.580 "adrfam": "ipv4", 00:31:22.580 "trsvcid": "4420", 00:31:22.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:22.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:22.580 "hdgst": false, 00:31:22.580 "ddgst": false 00:31:22.580 }, 00:31:22.580 "method": "bdev_nvme_attach_controller" 00:31:22.580 }' 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:22.580 "params": { 00:31:22.580 "name": "Nvme1", 00:31:22.580 "trtype": "tcp", 00:31:22.580 "traddr": "10.0.0.2", 00:31:22.580 "adrfam": "ipv4", 00:31:22.580 "trsvcid": "4420", 00:31:22.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:22.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:22.580 "hdgst": false, 00:31:22.580 "ddgst": false 00:31:22.580 }, 00:31:22.580 "method": "bdev_nvme_attach_controller" 00:31:22.580 }' 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:22.580 19:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:22.580 "params": { 00:31:22.580 "name": "Nvme1", 00:31:22.580 "trtype": "tcp", 00:31:22.580 "traddr": "10.0.0.2", 00:31:22.580 "adrfam": "ipv4", 00:31:22.580 "trsvcid": "4420", 00:31:22.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:22.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:22.580 "hdgst": false, 00:31:22.580 "ddgst": false 00:31:22.580 }, 00:31:22.580 "method": "bdev_nvme_attach_controller" 00:31:22.580 }' 00:31:22.580 [2024-11-26 19:32:45.225244] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:31:22.580 [2024-11-26 19:32:45.225295] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:22.580 [2024-11-26 19:32:45.226506] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:31:22.580 [2024-11-26 19:32:45.226552] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:31:22.580 [2024-11-26 19:32:45.230663] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:31:22.580 [2024-11-26 19:32:45.230718] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:31:22.580 [2024-11-26 19:32:45.232253] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:31:22.580 [2024-11-26 19:32:45.232296] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:31:22.580 [2024-11-26 19:32:45.417802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.580 [2024-11-26 19:32:45.460205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:22.580 [2024-11-26 19:32:45.511661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.580 [2024-11-26 19:32:45.567696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:22.580 [2024-11-26 19:32:45.579466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.580 [2024-11-26 19:32:45.621687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:22.580 [2024-11-26 19:32:45.642010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.580 [2024-11-26 19:32:45.682520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:22.838 Running I/O for 1 seconds... 00:31:22.838 Running I/O for 1 seconds... 00:31:22.838 Running I/O for 1 seconds... 00:31:22.838 Running I/O for 1 seconds... 
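Each of the four bdevperf instances above runs on its own core (-m 0x10/0x20/0x40/0x80) with its own shared-memory id (-i 1..4) and workload (-w write, read, flush, unmap), and reads its bdev configuration from a process substitution: gen_nvmf_target_json emits a bdev_nvme_attach_controller stanza pointing at the subsystem just created on 10.0.0.2:4420. A stand-alone version of the write instance, with the configuration written to a file instead of /dev/fd/63; the outer "subsystems"/"config" wrapper is the standard SPDK JSON-config layout, and the harness's generator may add options beyond what is shown here:

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
cat > /tmp/nvme1.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 128 outstanding 4096-byte writes for 1 second against the resulting Nvme1n1 bdev,
# on core 4 (mask 0x10) with 256 MB of memory and shared-memory id 1.
"$SPDK/build/examples/bdevperf" -m 0x10 -i 1 --json /tmp/nvme1.json \
    -q 128 -o 4096 -w write -t 1 -s 256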
00:31:23.771 245792.00 IOPS, 960.12 MiB/s 00:31:23.771 Latency(us) 00:31:23.771 [2024-11-26T18:32:46.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:23.771 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:31:23.771 Nvme1n1 : 1.00 245421.80 958.68 0.00 0.00 518.63 219.43 1490.16 00:31:23.771 [2024-11-26T18:32:46.885Z] =================================================================================================================== 00:31:23.771 [2024-11-26T18:32:46.885Z] Total : 245421.80 958.68 0.00 0.00 518.63 219.43 1490.16 00:31:23.771 12104.00 IOPS, 47.28 MiB/s 00:31:23.771 Latency(us) 00:31:23.771 [2024-11-26T18:32:46.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:23.771 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:31:23.771 Nvme1n1 : 1.01 12154.98 47.48 0.00 0.00 10494.83 3479.65 12170.97 00:31:23.771 [2024-11-26T18:32:46.885Z] =================================================================================================================== 00:31:23.771 [2024-11-26T18:32:46.885Z] Total : 12154.98 47.48 0.00 0.00 10494.83 3479.65 12170.97 00:31:23.771 11592.00 IOPS, 45.28 MiB/s 00:31:23.771 Latency(us) 00:31:23.771 [2024-11-26T18:32:46.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:23.771 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:31:23.771 Nvme1n1 : 1.01 11672.13 45.59 0.00 0.00 10937.82 1880.26 13856.18 00:31:23.771 [2024-11-26T18:32:46.885Z] =================================================================================================================== 00:31:23.771 [2024-11-26T18:32:46.885Z] Total : 11672.13 45.59 0.00 0.00 10937.82 1880.26 13856.18 00:31:23.771 9988.00 IOPS, 39.02 MiB/s 00:31:23.771 Latency(us) 00:31:23.771 [2024-11-26T18:32:46.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:23.771 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:31:23.771 Nvme1n1 : 1.01 10081.89 39.38 0.00 0.00 12666.04 1654.00 18474.91 00:31:23.771 [2024-11-26T18:32:46.885Z] =================================================================================================================== 00:31:23.771 [2024-11-26T18:32:46.885Z] Total : 10081.89 39.38 0.00 0.00 12666.04 1654.00 18474.91 00:31:24.030 19:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3949805 00:31:24.030 19:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3949807 00:31:24.030 19:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3949810 00:31:24.030 19:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:24.030 19:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.031 19:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:24.031 19:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.031 19:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:31:24.031 19:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:31:24.031 19:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:24.031 19:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:31:24.031 19:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:24.031 19:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:31:24.031 19:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:24.031 19:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:24.031 rmmod nvme_tcp 00:31:24.031 rmmod nvme_fabrics 00:31:24.031 rmmod nvme_keyring 00:31:24.031 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:24.031 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:31:24.031 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:31:24.031 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3949664 ']' 00:31:24.031 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3949664 00:31:24.031 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3949664 ']' 00:31:24.031 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3949664 00:31:24.031 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:31:24.031 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:24.031 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3949664 00:31:24.031 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:24.031 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:24.031 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3949664' 00:31:24.031 killing process with pid 3949664 00:31:24.031 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3949664 00:31:24.031 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3949664 00:31:24.290 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:24.290 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:24.290 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:24.290 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:31:24.290 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
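The nvmftestfini traced above unloads the NVMe/TCP host modules, kills the nvmf_tgt process it started, and then (via the iptr helper, whose iptables-save / grep / iptables-restore pipeline is traced across this point and continues just below) strips the firewall rules the harness added. A minimal sketch of that sequence, with the harness's retry loop and error handling trimmed:

  # Approximate teardown mirroring the traced nvmftestfini steps; error handling trimmed.
  sync
  modprobe -v -r nvme-tcp                        # the rmmod lines above are this unload
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                                # nvmfpid was recorded when nvmf_tgt started
  while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done
  # Strip only the firewall rules the harness tagged with an SPDK_NVMF comment:
  iptables-save | grep -v SPDK_NVMF | iptables-restore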
00:31:24.290 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:24.290 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:31:24.290 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:24.290 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:24.290 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.290 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:24.290 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.195 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:26.195 00:31:26.195 real 0m10.620s 00:31:26.195 user 0m14.234s 00:31:26.195 sys 0m6.591s 00:31:26.195 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:26.195 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:26.195 ************************************ 00:31:26.195 END TEST nvmf_bdev_io_wait 00:31:26.195 ************************************ 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:26.455 ************************************ 00:31:26.455 START TEST nvmf_queue_depth 00:31:26.455 ************************************ 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:26.455 * Looking for test storage... 
00:31:26.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:26.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.455 --rc genhtml_branch_coverage=1 00:31:26.455 --rc genhtml_function_coverage=1 00:31:26.455 --rc genhtml_legend=1 00:31:26.455 --rc geninfo_all_blocks=1 00:31:26.455 --rc geninfo_unexecuted_blocks=1 00:31:26.455 00:31:26.455 ' 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:26.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.455 --rc genhtml_branch_coverage=1 00:31:26.455 --rc genhtml_function_coverage=1 00:31:26.455 --rc genhtml_legend=1 00:31:26.455 --rc geninfo_all_blocks=1 00:31:26.455 --rc geninfo_unexecuted_blocks=1 00:31:26.455 00:31:26.455 ' 00:31:26.455 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:26.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.455 --rc genhtml_branch_coverage=1 00:31:26.455 --rc genhtml_function_coverage=1 00:31:26.455 --rc genhtml_legend=1 00:31:26.456 --rc geninfo_all_blocks=1 00:31:26.456 --rc geninfo_unexecuted_blocks=1 00:31:26.456 00:31:26.456 ' 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:26.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.456 --rc genhtml_branch_coverage=1 00:31:26.456 --rc genhtml_function_coverage=1 00:31:26.456 --rc genhtml_legend=1 00:31:26.456 --rc geninfo_all_blocks=1 00:31:26.456 --rc 
geninfo_unexecuted_blocks=1 00:31:26.456 00:31:26.456 ' 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:26.456 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:26.715 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.716 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:26.716 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.716 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:26.716 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:26.716 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:26.716 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
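The build_nvmf_app_args trace above shows how the target command line is assembled for this run: a shared-memory id, the 0xFFFF tracepoint mask, and, because the test was invoked with --interrupt-mode, the extra --interrupt-mode flag. A condensed sketch of that assembly; the variable gating the interrupt-mode branch is not named in the trace, so INTERRUPT_MODE below is only a placeholder:

  # Condensed from the traced build_nvmf_app_args conditionals in nvmf/common.sh.
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shared-memory id + enable every tracepoint group
  NVMF_APP+=("${NO_HUGE[@]}")                    # empty unless the run asked for no hugepages
  if [ "$INTERRUPT_MODE" -eq 1 ]; then           # placeholder name; evaluates true for this run
      NVMF_APP+=(--interrupt-mode)
  fi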
00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:31.989 19:32:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:31.989 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:31.989 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.989 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
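gather_supported_nvmf_pci_devs walks the known Intel E810/x722 and Mellanox PCI IDs, keeps the E810 functions found on this host (0000:86:00.0 and 0000:86:00.1, device 0x159b), and resolves each one to its kernel net device through sysfs; the echoed "Found net devices under ..." lines directly below are the result. The core of that lookup is just the sysfs glob traced above:

  # Map a PCI function to its net device name(s) via sysfs, as the harness does:
  pci=0000:86:00.0
  ls "/sys/bus/pci/devices/$pci/net/"            # prints cvl_0_0 on this host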
00:31:31.990 Found net devices under 0000:86:00.0: cvl_0_0 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:31.990 Found net devices under 0000:86:00.1: cvl_0_1 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:31.990 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:32.249 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:32.249 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:32.249 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:32.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:32.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:31:32.250 00:31:32.250 --- 10.0.0.2 ping statistics --- 00:31:32.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.250 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
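nvmf_tcp_init, traced above, builds the point-to-point test topology: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables ACCEPT rule is added for the NVMe/TCP port, and a ping in each direction proves connectivity (the reverse ping follows just below). A minimal sketch of those steps, using the interface and namespace names from this run:

  # Point-to-point NVMe/TCP test topology, as traced above.
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                              # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1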
00:31:32.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:31:32.250 00:31:32.250 --- 10.0.0.1 ping statistics --- 00:31:32.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.250 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3954410 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3954410 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3954410 ']' 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:32.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
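nvmfappstart then launches the target inside that namespace with a single-core mask (0x2) and interrupt mode enabled, and waits for the RPC socket before any rpc_cmd is issued. The launch line below is taken from the trace; the polling loop is only a minimal stand-in for the harness's waitforlisten helper, using the rpc_get_methods RPC as a liveness probe:

  # Start the target in the namespace (flags as traced), then wait for its RPC socket.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  # Minimal waitforlisten stand-in: poll /var/tmp/spdk.sock until the app answers.
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done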
00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:32.250 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.250 [2024-11-26 19:32:55.335913] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:32.250 [2024-11-26 19:32:55.336848] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:31:32.250 [2024-11-26 19:32:55.336882] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:32.510 [2024-11-26 19:32:55.418102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.510 [2024-11-26 19:32:55.463486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:32.510 [2024-11-26 19:32:55.463512] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:32.510 [2024-11-26 19:32:55.463519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:32.510 [2024-11-26 19:32:55.463525] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:32.510 [2024-11-26 19:32:55.463530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:32.510 [2024-11-26 19:32:55.464096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.510 [2024-11-26 19:32:55.531554] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:32.510 [2024-11-26 19:32:55.531789] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:32.510 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:32.510 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:32.510 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:32.510 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:32.510 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.510 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:32.510 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:32.510 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.510 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.510 [2024-11-26 19:32:55.596723] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:32.510 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.510 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:32.510 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.510 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.769 Malloc0 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
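The rpc_cmd calls traced above provision the target end to end: a TCP transport with the harness's "-o -u 8192" options, a 64 MiB / 512-byte-block Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a TCP listener on 10.0.0.2:4420 (the "Listening" notice appears just below). The same sequence as direct rpc.py calls against the target's default /var/tmp/spdk.sock socket:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB backing bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420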
00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.769 [2024-11-26 19:32:55.668891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3954574 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3954574 /var/tmp/bdevperf.sock 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3954574 ']' 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:32.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:32.769 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.769 [2024-11-26 19:32:55.721605] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
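The initiator side is a bdevperf instance started with -z so it idles until driven over its own RPC socket (/var/tmp/bdevperf.sock), configured for queue depth 1024, 4 KiB I/O, a verify workload, and a 10-second run. The lines that follow show the harness attaching the remote controller and starting the run; a condensed sketch of that flow, with the attach and perform_tests invocations copied from the trace further below:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  # After waiting for /var/tmp/bdevperf.sock (waitforlisten, as traced), attach the
  # NVMe-oF namespace exported above and drive the workload over RPC:
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests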
00:31:32.769 [2024-11-26 19:32:55.721650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3954574 ] 00:31:32.769 [2024-11-26 19:32:55.797128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.769 [2024-11-26 19:32:55.842201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.028 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:33.028 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:33.028 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:33.028 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.028 19:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:33.028 NVMe0n1 00:31:33.028 19:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.028 19:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:33.286 Running I/O for 10 seconds... 00:31:35.158 11357.00 IOPS, 44.36 MiB/s [2024-11-26T18:32:59.648Z] 11646.50 IOPS, 45.49 MiB/s [2024-11-26T18:33:00.585Z] 11770.67 IOPS, 45.98 MiB/s [2024-11-26T18:33:01.521Z] 11962.50 IOPS, 46.73 MiB/s [2024-11-26T18:33:02.457Z] 11924.20 IOPS, 46.58 MiB/s [2024-11-26T18:33:03.394Z] 11954.00 IOPS, 46.70 MiB/s [2024-11-26T18:33:04.331Z] 12032.86 IOPS, 47.00 MiB/s [2024-11-26T18:33:05.268Z] 12071.50 IOPS, 47.15 MiB/s [2024-11-26T18:33:06.645Z] 12080.89 IOPS, 47.19 MiB/s [2024-11-26T18:33:06.645Z] 12086.80 IOPS, 47.21 MiB/s 00:31:43.531 Latency(us) 00:31:43.531 [2024-11-26T18:33:06.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:43.531 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:43.531 Verification LBA range: start 0x0 length 0x4000 00:31:43.531 NVMe0n1 : 10.05 12123.78 47.36 0.00 0.00 84198.53 13419.28 59668.97 00:31:43.531 [2024-11-26T18:33:06.645Z] =================================================================================================================== 00:31:43.531 [2024-11-26T18:33:06.645Z] Total : 12123.78 47.36 0.00 0.00 84198.53 13419.28 59668.97 00:31:43.531 { 00:31:43.531 "results": [ 00:31:43.531 { 00:31:43.531 "job": "NVMe0n1", 00:31:43.531 "core_mask": "0x1", 00:31:43.531 "workload": "verify", 00:31:43.531 "status": "finished", 00:31:43.531 "verify_range": { 00:31:43.531 "start": 0, 00:31:43.531 "length": 16384 00:31:43.531 }, 00:31:43.531 "queue_depth": 1024, 00:31:43.531 "io_size": 4096, 00:31:43.531 "runtime": 10.053962, 00:31:43.531 "iops": 12123.777670932117, 00:31:43.531 "mibps": 47.35850652707858, 00:31:43.531 "io_failed": 0, 00:31:43.531 "io_timeout": 0, 00:31:43.531 "avg_latency_us": 84198.53434206393, 00:31:43.531 "min_latency_us": 13419.27619047619, 00:31:43.531 "max_latency_us": 59668.96761904762 00:31:43.531 } 
00:31:43.531 ], 00:31:43.531 "core_count": 1 00:31:43.531 } 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3954574 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3954574 ']' 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3954574 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3954574 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3954574' 00:31:43.531 killing process with pid 3954574 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3954574 00:31:43.531 Received shutdown signal, test time was about 10.000000 seconds 00:31:43.531 00:31:43.531 Latency(us) 00:31:43.531 [2024-11-26T18:33:06.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:43.531 [2024-11-26T18:33:06.645Z] =================================================================================================================== 00:31:43.531 [2024-11-26T18:33:06.645Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3954574 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:43.531 rmmod nvme_tcp 00:31:43.531 rmmod nvme_fabrics 00:31:43.531 rmmod nvme_keyring 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
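perform_tests reports the JSON block printed above, and the summary table (12123.78 IOPS, 47.36 MiB/s, about 84.2 ms average latency at queue depth 1024) mirrors the same fields. To pull the headline numbers out programmatically rather than scraping the table, something like the following works against the keys visible in the output, assuming the JSON block has been saved to a file (result.json is just a placeholder name) and jq is available on the host:

  # Extract the headline numbers from the perform_tests JSON (field names as printed above):
  jq '.results[0] | {job, iops, mibps, avg_latency_us, io_failed}' result.json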
00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3954410 ']' 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3954410 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3954410 ']' 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3954410 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:43.531 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3954410 00:31:43.790 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:43.790 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:43.790 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3954410' 00:31:43.790 killing process with pid 3954410 00:31:43.790 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3954410 00:31:43.790 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3954410 00:31:43.790 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:43.790 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:43.790 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:43.790 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:43.790 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:43.790 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:43.790 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:31:43.790 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:43.790 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:43.790 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.790 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:43.790 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.352 19:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:46.352 00:31:46.352 real 0m19.535s 00:31:46.352 user 0m22.616s 00:31:46.352 sys 0m6.235s 00:31:46.352 19:33:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:46.352 19:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:46.352 ************************************ 00:31:46.352 END TEST nvmf_queue_depth 00:31:46.352 ************************************ 00:31:46.352 19:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:46.352 19:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:46.352 19:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:46.352 19:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:46.352 ************************************ 00:31:46.352 START TEST nvmf_target_multipath 00:31:46.352 ************************************ 00:31:46.352 19:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:46.352 * Looking for test storage... 00:31:46.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:46.352 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:46.352 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:31:46.352 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:46.352 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:46.352 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:46.352 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:46.352 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:46.352 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:46.352 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:46.352 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:46.352 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:46.352 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:46.352 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:46.352 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:46.352 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:46.352 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:46.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.353 --rc genhtml_branch_coverage=1 00:31:46.353 --rc genhtml_function_coverage=1 00:31:46.353 --rc genhtml_legend=1 00:31:46.353 --rc geninfo_all_blocks=1 00:31:46.353 --rc geninfo_unexecuted_blocks=1 00:31:46.353 00:31:46.353 ' 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:46.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.353 --rc genhtml_branch_coverage=1 00:31:46.353 --rc genhtml_function_coverage=1 00:31:46.353 --rc genhtml_legend=1 00:31:46.353 --rc geninfo_all_blocks=1 00:31:46.353 --rc geninfo_unexecuted_blocks=1 00:31:46.353 00:31:46.353 ' 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:46.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.353 --rc genhtml_branch_coverage=1 00:31:46.353 --rc genhtml_function_coverage=1 00:31:46.353 --rc genhtml_legend=1 
00:31:46.353 --rc geninfo_all_blocks=1 00:31:46.353 --rc geninfo_unexecuted_blocks=1 00:31:46.353 00:31:46.353 ' 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:46.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.353 --rc genhtml_branch_coverage=1 00:31:46.353 --rc genhtml_function_coverage=1 00:31:46.353 --rc genhtml_legend=1 00:31:46.353 --rc geninfo_all_blocks=1 00:31:46.353 --rc geninfo_unexecuted_blocks=1 00:31:46.353 00:31:46.353 ' 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:46.353 19:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:31:51.769 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:51.769 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:51.769 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:51.769 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:51.769 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:51.770 19:33:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:51.770 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:51.770 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:51.770 19:33:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:51.770 Found net devices under 0000:86:00.0: cvl_0_0 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:51.770 Found net devices under 0000:86:00.1: cvl_0_1 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:51.770 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:52.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:52.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:31:52.030 00:31:52.030 --- 10.0.0.2 ping statistics --- 00:31:52.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:52.030 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:52.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:52.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:31:52.030 00:31:52.030 --- 10.0.0.1 ping statistics --- 00:31:52.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:52.030 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:52.030 only one NIC for nvmf test 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:52.030 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:52.030 rmmod nvme_tcp 00:31:52.030 rmmod nvme_fabrics 00:31:52.030 rmmod nvme_keyring 00:31:52.030 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:52.030 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:52.030 19:33:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:52.030 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:52.030 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:52.030 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:52.030 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:52.030 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:52.030 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:52.031 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:52.031 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:52.031 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:52.031 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:52.031 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.031 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.031 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:54.567 19:33:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:54.567 00:31:54.567 real 0m8.146s 00:31:54.567 user 0m1.668s 00:31:54.567 sys 0m4.458s 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:54.567 ************************************ 00:31:54.567 END TEST nvmf_target_multipath 00:31:54.567 ************************************ 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:54.567 ************************************ 00:31:54.567 START TEST nvmf_zcopy 00:31:54.567 ************************************ 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:54.567 * Looking for test storage... 
00:31:54.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:54.567 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:54.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.568 --rc genhtml_branch_coverage=1 00:31:54.568 --rc genhtml_function_coverage=1 00:31:54.568 --rc genhtml_legend=1 00:31:54.568 --rc geninfo_all_blocks=1 00:31:54.568 --rc geninfo_unexecuted_blocks=1 00:31:54.568 00:31:54.568 ' 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:54.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.568 --rc genhtml_branch_coverage=1 00:31:54.568 --rc genhtml_function_coverage=1 00:31:54.568 --rc genhtml_legend=1 00:31:54.568 --rc geninfo_all_blocks=1 00:31:54.568 --rc geninfo_unexecuted_blocks=1 00:31:54.568 00:31:54.568 ' 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:54.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.568 --rc genhtml_branch_coverage=1 00:31:54.568 --rc genhtml_function_coverage=1 00:31:54.568 --rc genhtml_legend=1 00:31:54.568 --rc geninfo_all_blocks=1 00:31:54.568 --rc geninfo_unexecuted_blocks=1 00:31:54.568 00:31:54.568 ' 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:54.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.568 --rc genhtml_branch_coverage=1 00:31:54.568 --rc genhtml_function_coverage=1 00:31:54.568 --rc genhtml_legend=1 00:31:54.568 --rc geninfo_all_blocks=1 00:31:54.568 --rc geninfo_unexecuted_blocks=1 00:31:54.568 00:31:54.568 ' 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:54.568 19:33:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:54.568 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:54.569 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:54.569 19:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:59.839 19:33:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:59.839 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:59.839 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:59.839 Found net devices under 0000:86:00.0: cvl_0_0 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:59.839 Found net devices under 0000:86:00.1: cvl_0_1 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.839 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:59.840 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.840 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.840 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:59.840 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:59.840 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.840 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.840 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:59.840 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:59.840 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.840 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:00.099 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:00.099 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:00.099 19:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:00.099 19:33:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:00.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:32:00.099 00:32:00.099 --- 10.0.0.2 ping statistics --- 00:32:00.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.099 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:00.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:00.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:32:00.099 00:32:00.099 --- 10.0.0.1 ping statistics --- 00:32:00.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.099 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3966319 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3966319 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3966319 ']' 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.099 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.099 [2024-11-26 19:33:23.173355] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:00.099 [2024-11-26 19:33:23.174296] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:32:00.099 [2024-11-26 19:33:23.174330] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.358 [2024-11-26 19:33:23.251902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.358 [2024-11-26 19:33:23.292711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.358 [2024-11-26 19:33:23.292742] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.358 [2024-11-26 19:33:23.292751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.358 [2024-11-26 19:33:23.292762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.358 [2024-11-26 19:33:23.292768] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.358 [2024-11-26 19:33:23.293298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.358 [2024-11-26 19:33:23.361304] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:00.358 [2024-11-26 19:33:23.361522] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
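For reference, the nvmf_tcp_init and nvmfappstart steps traced above condense to the shell sequence below (a sketch with commands taken from the trace and paths shortened; cvl_0_0 is the E810 port moved into the cvl_0_0_ns_spdk namespace as the target side, cvl_0_1 stays on the host as the initiator side):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator-side interface (the ipts helper also tags the rule with an SPDK_NVMF comment).
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Verify connectivity in both directions, then start the target inside the namespace in interrupt mode on core mask 0x2.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2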
00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.358 [2024-11-26 19:33:23.425924] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.358 [2024-11-26 19:33:23.450101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:00.358 19:33:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.358 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.617 malloc0 00:32:00.617 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.617 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:00.617 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.617 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.617 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.617 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:00.617 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:00.617 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:00.617 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:00.617 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:00.618 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:00.618 { 00:32:00.618 "params": { 00:32:00.618 "name": "Nvme$subsystem", 00:32:00.618 "trtype": "$TEST_TRANSPORT", 00:32:00.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.618 "adrfam": "ipv4", 00:32:00.618 "trsvcid": "$NVMF_PORT", 00:32:00.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.618 "hdgst": ${hdgst:-false}, 00:32:00.618 "ddgst": ${ddgst:-false} 00:32:00.618 }, 00:32:00.618 "method": "bdev_nvme_attach_controller" 00:32:00.618 } 00:32:00.618 EOF 00:32:00.618 )") 00:32:00.618 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:00.618 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:00.618 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:00.618 19:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:00.618 "params": { 00:32:00.618 "name": "Nvme1", 00:32:00.618 "trtype": "tcp", 00:32:00.618 "traddr": "10.0.0.2", 00:32:00.618 "adrfam": "ipv4", 00:32:00.618 "trsvcid": "4420", 00:32:00.618 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:00.618 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:00.618 "hdgst": false, 00:32:00.618 "ddgst": false 00:32:00.618 }, 00:32:00.618 "method": "bdev_nvme_attach_controller" 00:32:00.618 }' 00:32:00.618 [2024-11-26 19:33:23.539620] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:32:00.618 [2024-11-26 19:33:23.539664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3966404 ] 00:32:00.618 [2024-11-26 19:33:23.616176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.618 [2024-11-26 19:33:23.658772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.877 Running I/O for 10 seconds... 00:32:03.191 8265.00 IOPS, 64.57 MiB/s [2024-11-26T18:33:27.243Z] 8291.00 IOPS, 64.77 MiB/s [2024-11-26T18:33:28.178Z] 8316.33 IOPS, 64.97 MiB/s [2024-11-26T18:33:29.115Z] 8324.75 IOPS, 65.04 MiB/s [2024-11-26T18:33:30.058Z] 8343.60 IOPS, 65.18 MiB/s [2024-11-26T18:33:30.993Z] 8377.33 IOPS, 65.45 MiB/s [2024-11-26T18:33:32.371Z] 8347.29 IOPS, 65.21 MiB/s [2024-11-26T18:33:33.308Z] 8357.25 IOPS, 65.29 MiB/s [2024-11-26T18:33:34.246Z] 8356.78 IOPS, 65.29 MiB/s [2024-11-26T18:33:34.246Z] 8349.80 IOPS, 65.23 MiB/s 00:32:11.132 Latency(us) 00:32:11.132 [2024-11-26T18:33:34.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.132 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:32:11.132 Verification LBA range: start 0x0 length 0x1000 00:32:11.132 Nvme1n1 : 10.01 8352.52 65.25 0.00 0.00 15281.04 2527.82 21346.01 00:32:11.132 [2024-11-26T18:33:34.246Z] =================================================================================================================== 00:32:11.132 [2024-11-26T18:33:34.246Z] Total : 8352.52 65.25 0.00 0.00 15281.04 2527.82 21346.01 00:32:11.132 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3969724 00:32:11.132 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:32:11.132 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:11.132 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:32:11.132 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:32:11.132 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:11.132 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:11.132 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:11.132 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:11.132 { 00:32:11.132 "params": { 00:32:11.132 "name": "Nvme$subsystem", 00:32:11.132 "trtype": "$TEST_TRANSPORT", 00:32:11.132 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:11.132 "adrfam": "ipv4", 00:32:11.132 "trsvcid": "$NVMF_PORT", 00:32:11.132 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:11.132 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:11.132 "hdgst": ${hdgst:-false}, 00:32:11.132 "ddgst": ${ddgst:-false} 00:32:11.132 }, 00:32:11.132 "method": "bdev_nvme_attach_controller" 00:32:11.132 } 00:32:11.132 EOF 00:32:11.132 )") 00:32:11.132 [2024-11-26 19:33:34.137635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:32:11.132 [2024-11-26 19:33:34.137666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.132 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:11.132 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:11.132 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:11.132 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:11.132 "params": { 00:32:11.132 "name": "Nvme1", 00:32:11.132 "trtype": "tcp", 00:32:11.132 "traddr": "10.0.0.2", 00:32:11.132 "adrfam": "ipv4", 00:32:11.132 "trsvcid": "4420", 00:32:11.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:11.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:11.132 "hdgst": false, 00:32:11.132 "ddgst": false 00:32:11.132 }, 00:32:11.132 "method": "bdev_nvme_attach_controller" 00:32:11.132 }' 00:32:11.132 [2024-11-26 19:33:34.149596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.132 [2024-11-26 19:33:34.149608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.132 [2024-11-26 19:33:34.161593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.132 [2024-11-26 19:33:34.161604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.132 [2024-11-26 19:33:34.173596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.132 [2024-11-26 19:33:34.173608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.132 [2024-11-26 19:33:34.176573] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:32:11.132 [2024-11-26 19:33:34.176616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3969724 ] 00:32:11.132 [2024-11-26 19:33:34.185601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.132 [2024-11-26 19:33:34.185613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.132 [2024-11-26 19:33:34.197591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.132 [2024-11-26 19:33:34.197601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.132 [2024-11-26 19:33:34.209594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.132 [2024-11-26 19:33:34.209603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.132 [2024-11-26 19:33:34.221593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.132 [2024-11-26 19:33:34.221603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.132 [2024-11-26 19:33:34.233595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.132 [2024-11-26 19:33:34.233605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.245597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.245608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.251153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.392 [2024-11-26 19:33:34.257594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.257610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.269595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.269610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.281594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.281604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.293597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.293610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.293950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.392 [2024-11-26 19:33:34.305606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.305621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.317601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.317620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.329608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:32:11.392 [2024-11-26 19:33:34.329627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.341597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.341609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.353597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.353610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.365593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.365604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.377603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.377620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.389608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.389625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.401608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.401626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.413606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.413623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.425611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.425629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.437608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.437628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.449603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.449619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.461598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.461612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.473595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.473604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.485595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.485606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.392 [2024-11-26 19:33:34.497595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.392 [2024-11-26 19:33:34.497605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.651 [2024-11-26 
19:33:34.509598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.651 [2024-11-26 19:33:34.509611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.651 [2024-11-26 19:33:34.521593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.651 [2024-11-26 19:33:34.521602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.651 [2024-11-26 19:33:34.533594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.651 [2024-11-26 19:33:34.533603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.651 [2024-11-26 19:33:34.545600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.651 [2024-11-26 19:33:34.545613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.651 [2024-11-26 19:33:34.557593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.651 [2024-11-26 19:33:34.557604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.651 [2024-11-26 19:33:34.569594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.651 [2024-11-26 19:33:34.569604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.651 [2024-11-26 19:33:34.581594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.651 [2024-11-26 19:33:34.581604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.651 [2024-11-26 19:33:34.593601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.651 [2024-11-26 19:33:34.593616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.651 [2024-11-26 19:33:34.605601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.652 [2024-11-26 19:33:34.605617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.652 Running I/O for 5 seconds... 
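Condensed from the trace, the zcopy target provisioning and the second bdevperf pass come down to the sequence below (a sketch: rpc.py stands in for the test suite's rpc_cmd wrapper, the process substitution mirrors the --json /dev/fd/63 seen above, and paths are shortened; RPC method names and arguments are as logged):

  # Target-side provisioning for the zcopy test.
  rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # Second bdevperf pass: 50/50 random read/write, queue depth 128, 8192-byte I/O, 5 seconds.
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192

The recurring 'Requested NSID 1 already in use' / 'Unable to add namespace' pairs through this stretch are nvmf_subsystem_add_ns RPCs being rejected because namespace 1 is still attached while the run is in progress.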
00:32:11.652 [2024-11-26 19:33:34.619544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.652 [2024-11-26 19:33:34.619563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.652 [2024-11-26 19:33:34.634223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.652 [2024-11-26 19:33:34.634241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.652 [2024-11-26 19:33:34.650124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.652 [2024-11-26 19:33:34.650144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.652 [2024-11-26 19:33:34.665303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.652 [2024-11-26 19:33:34.665332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.652 [2024-11-26 19:33:34.679946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.652 [2024-11-26 19:33:34.679965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.652 [2024-11-26 19:33:34.694664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.652 [2024-11-26 19:33:34.694690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.652 [2024-11-26 19:33:34.709428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.652 [2024-11-26 19:33:34.709449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.652 [2024-11-26 19:33:34.722616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.652 [2024-11-26 19:33:34.722636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.652 [2024-11-26 19:33:34.737735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.652 [2024-11-26 19:33:34.737755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.652 [2024-11-26 19:33:34.749388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.652 [2024-11-26 19:33:34.749408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.652 [2024-11-26 19:33:34.763480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.652 [2024-11-26 19:33:34.763500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.911 [2024-11-26 19:33:34.778246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.912 [2024-11-26 19:33:34.778265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.912 [2024-11-26 19:33:34.793735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.912 [2024-11-26 19:33:34.793755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.912 [2024-11-26 19:33:34.807586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.912 [2024-11-26 19:33:34.807609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.912 [2024-11-26 19:33:34.822228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.912 
[2024-11-26 19:33:34.822247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.912 [2024-11-26 19:33:34.837606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.912 [2024-11-26 19:33:34.837626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.912 [2024-11-26 19:33:34.849451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.912 [2024-11-26 19:33:34.849471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.912 [2024-11-26 19:33:34.863505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.912 [2024-11-26 19:33:34.863525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.912 [2024-11-26 19:33:34.878257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.912 [2024-11-26 19:33:34.878276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.912 [2024-11-26 19:33:34.894180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.912 [2024-11-26 19:33:34.894200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.912 [2024-11-26 19:33:34.909584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.912 [2024-11-26 19:33:34.909603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.912 [2024-11-26 19:33:34.923783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.912 [2024-11-26 19:33:34.923802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.912 [2024-11-26 19:33:34.938532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.912 [2024-11-26 19:33:34.938553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.912 [2024-11-26 19:33:34.953318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.912 [2024-11-26 19:33:34.953337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.912 [2024-11-26 19:33:34.966550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.912 [2024-11-26 19:33:34.966567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.912 [2024-11-26 19:33:34.981777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.912 [2024-11-26 19:33:34.981796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.912 [2024-11-26 19:33:34.991886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.912 [2024-11-26 19:33:34.991913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.912 [2024-11-26 19:33:35.006804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.912 [2024-11-26 19:33:35.006823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.912 [2024-11-26 19:33:35.021911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.912 [2024-11-26 19:33:35.021929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.171 [2024-11-26 19:33:35.033139] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.171 [2024-11-26 19:33:35.033157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.171 [2024-11-26 19:33:35.047708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.171 [2024-11-26 19:33:35.047727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.172 [2024-11-26 19:33:35.062510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.172 [2024-11-26 19:33:35.062528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.172 [2024-11-26 19:33:35.078027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.172 [2024-11-26 19:33:35.078045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.172 [2024-11-26 19:33:35.093648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.172 [2024-11-26 19:33:35.093666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.172 [2024-11-26 19:33:35.106685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.172 [2024-11-26 19:33:35.106703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.172 [2024-11-26 19:33:35.121769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.172 [2024-11-26 19:33:35.121789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.172 [2024-11-26 19:33:35.132735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.172 [2024-11-26 19:33:35.132755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.172 [2024-11-26 19:33:35.147840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.172 [2024-11-26 19:33:35.147859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.172 [2024-11-26 19:33:35.162817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.172 [2024-11-26 19:33:35.162835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.172 [2024-11-26 19:33:35.174094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.172 [2024-11-26 19:33:35.174113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.172 [2024-11-26 19:33:35.189274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.172 [2024-11-26 19:33:35.189293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.172 [2024-11-26 19:33:35.203740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.172 [2024-11-26 19:33:35.203758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.172 [2024-11-26 19:33:35.218795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.172 [2024-11-26 19:33:35.218814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.172 [2024-11-26 19:33:35.234382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.172 [2024-11-26 19:33:35.234401] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.172 [2024-11-26 19:33:35.249294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.172 [2024-11-26 19:33:35.249312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.172 [2024-11-26 19:33:35.263783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.172 [2024-11-26 19:33:35.263808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.172 [2024-11-26 19:33:35.279582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.172 [2024-11-26 19:33:35.279602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.431 [2024-11-26 19:33:35.294827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.431 [2024-11-26 19:33:35.294847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.431 [2024-11-26 19:33:35.309986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.431 [2024-11-26 19:33:35.310005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.431 [2024-11-26 19:33:35.321827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.431 [2024-11-26 19:33:35.321845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.431 [2024-11-26 19:33:35.335249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.431 [2024-11-26 19:33:35.335268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.431 [2024-11-26 19:33:35.349812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.431 [2024-11-26 19:33:35.349832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.431 [2024-11-26 19:33:35.363423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.431 [2024-11-26 19:33:35.363442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.431 [2024-11-26 19:33:35.378096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.431 [2024-11-26 19:33:35.378114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.431 [2024-11-26 19:33:35.393706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.431 [2024-11-26 19:33:35.393724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.431 [2024-11-26 19:33:35.404936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.431 [2024-11-26 19:33:35.404955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.431 [2024-11-26 19:33:35.419306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.431 [2024-11-26 19:33:35.419327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.431 [2024-11-26 19:33:35.434439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.431 [2024-11-26 19:33:35.434458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.431 [2024-11-26 19:33:35.450264] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.431 [2024-11-26 19:33:35.450285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.431 [2024-11-26 19:33:35.465702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.431 [2024-11-26 19:33:35.465723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.431 [2024-11-26 19:33:35.478410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.431 [2024-11-26 19:33:35.478431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.431 [2024-11-26 19:33:35.491371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.431 [2024-11-26 19:33:35.491391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.431 [2024-11-26 19:33:35.506320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.432 [2024-11-26 19:33:35.506341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.432 [2024-11-26 19:33:35.522101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.432 [2024-11-26 19:33:35.522122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.432 [2024-11-26 19:33:35.534329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.432 [2024-11-26 19:33:35.534353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.691 [2024-11-26 19:33:35.549345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.691 [2024-11-26 19:33:35.549364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.691 [2024-11-26 19:33:35.563193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.691 [2024-11-26 19:33:35.563213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.691 [2024-11-26 19:33:35.573709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.691 [2024-11-26 19:33:35.573728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.691 [2024-11-26 19:33:35.587817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.691 [2024-11-26 19:33:35.587838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.691 [2024-11-26 19:33:35.603489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.691 [2024-11-26 19:33:35.603509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.691 16497.00 IOPS, 128.88 MiB/s [2024-11-26T18:33:35.805Z] [2024-11-26 19:33:35.618642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.691 [2024-11-26 19:33:35.618661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.691 [2024-11-26 19:33:35.634087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.691 [2024-11-26 19:33:35.634107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.691 [2024-11-26 19:33:35.649729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:32:12.691 [2024-11-26 19:33:35.649750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.691 [2024-11-26 19:33:35.660235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.691 [2024-11-26 19:33:35.660255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.691 [2024-11-26 19:33:35.676455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.691 [2024-11-26 19:33:35.676476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.691 [2024-11-26 19:33:35.691081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.691 [2024-11-26 19:33:35.691104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.691 [2024-11-26 19:33:35.708480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.691 [2024-11-26 19:33:35.708502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.691 [2024-11-26 19:33:35.720989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.691 [2024-11-26 19:33:35.721010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.691 [2024-11-26 19:33:35.735962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.691 [2024-11-26 19:33:35.735981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.691 [2024-11-26 19:33:35.751207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.691 [2024-11-26 19:33:35.751226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.691 [2024-11-26 19:33:35.767021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.691 [2024-11-26 19:33:35.767041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.691 [2024-11-26 19:33:35.781619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.691 [2024-11-26 19:33:35.781637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.691 [2024-11-26 19:33:35.795399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.691 [2024-11-26 19:33:35.795419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.950 [2024-11-26 19:33:35.810238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.950 [2024-11-26 19:33:35.810258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.950 [2024-11-26 19:33:35.825602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.950 [2024-11-26 19:33:35.825622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.950 [2024-11-26 19:33:35.836643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.950 [2024-11-26 19:33:35.836663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.950 [2024-11-26 19:33:35.851838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.950 [2024-11-26 19:33:35.851858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.950 [2024-11-26 19:33:35.866722] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.950 [2024-11-26 19:33:35.866742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.950 [2024-11-26 19:33:35.881804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.950 [2024-11-26 19:33:35.881825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.950 [2024-11-26 19:33:35.894139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.950 [2024-11-26 19:33:35.894158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.950 [2024-11-26 19:33:35.907360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.950 [2024-11-26 19:33:35.907380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.950 [2024-11-26 19:33:35.922241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.950 [2024-11-26 19:33:35.922262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.950 [2024-11-26 19:33:35.938050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.950 [2024-11-26 19:33:35.938074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.950 [2024-11-26 19:33:35.950910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.950 [2024-11-26 19:33:35.950929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.950 [2024-11-26 19:33:35.966594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.950 [2024-11-26 19:33:35.966615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.950 [2024-11-26 19:33:35.982059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.950 [2024-11-26 19:33:35.982077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.950 [2024-11-26 19:33:35.994077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.950 [2024-11-26 19:33:35.994094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.950 [2024-11-26 19:33:36.007500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.950 [2024-11-26 19:33:36.007519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.950 [2024-11-26 19:33:36.022974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.950 [2024-11-26 19:33:36.022994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.950 [2024-11-26 19:33:36.037428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.950 [2024-11-26 19:33:36.037448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.950 [2024-11-26 19:33:36.048881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.950 [2024-11-26 19:33:36.048899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.209 [2024-11-26 19:33:36.063838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.209 [2024-11-26 19:33:36.063856] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.209 [2024-11-26 19:33:36.078743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.209 [2024-11-26 19:33:36.078761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.209 [2024-11-26 19:33:36.093551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.209 [2024-11-26 19:33:36.093570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.209 [2024-11-26 19:33:36.106622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.209 [2024-11-26 19:33:36.106641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.209 [2024-11-26 19:33:36.121915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.209 [2024-11-26 19:33:36.121934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.209 [2024-11-26 19:33:36.137728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.209 [2024-11-26 19:33:36.137747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.209 [2024-11-26 19:33:36.150612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.209 [2024-11-26 19:33:36.150630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.209 [2024-11-26 19:33:36.165274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.209 [2024-11-26 19:33:36.165293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.209 [2024-11-26 19:33:36.176337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.209 [2024-11-26 19:33:36.176355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.209 [2024-11-26 19:33:36.191496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.209 [2024-11-26 19:33:36.191514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.209 [2024-11-26 19:33:36.206501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.209 [2024-11-26 19:33:36.206519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.209 [2024-11-26 19:33:36.222189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.209 [2024-11-26 19:33:36.222208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.209 [2024-11-26 19:33:36.234430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.209 [2024-11-26 19:33:36.234448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.209 [2024-11-26 19:33:36.249309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.209 [2024-11-26 19:33:36.249327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.209 [2024-11-26 19:33:36.262397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.209 [2024-11-26 19:33:36.262416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.209 [2024-11-26 19:33:36.275293] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.209 [2024-11-26 19:33:36.275311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.209 [2024-11-26 19:33:36.290562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.209 [2024-11-26 19:33:36.290581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.209 [2024-11-26 19:33:36.305280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.209 [2024-11-26 19:33:36.305299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.209 [2024-11-26 19:33:36.318908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.209 [2024-11-26 19:33:36.318927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.468 [2024-11-26 19:33:36.330167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.468 [2024-11-26 19:33:36.330185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.468 [2024-11-26 19:33:36.343114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.468 [2024-11-26 19:33:36.343133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.468 [2024-11-26 19:33:36.357912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.468 [2024-11-26 19:33:36.357932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.468 [2024-11-26 19:33:36.373577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.468 [2024-11-26 19:33:36.373596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.468 [2024-11-26 19:33:36.386166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.468 [2024-11-26 19:33:36.386185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.468 [2024-11-26 19:33:36.399685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.468 [2024-11-26 19:33:36.399703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.468 [2024-11-26 19:33:36.414955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.468 [2024-11-26 19:33:36.414974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.468 [2024-11-26 19:33:36.429835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.468 [2024-11-26 19:33:36.429854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.468 [2024-11-26 19:33:36.441105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.468 [2024-11-26 19:33:36.441123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.468 [2024-11-26 19:33:36.455900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.468 [2024-11-26 19:33:36.455919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.468 [2024-11-26 19:33:36.470503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.468 [2024-11-26 19:33:36.470522] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.468 [2024-11-26 19:33:36.486005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.468 [2024-11-26 19:33:36.486023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.468 [2024-11-26 19:33:36.501087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.468 [2024-11-26 19:33:36.501105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.468 [2024-11-26 19:33:36.515266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.468 [2024-11-26 19:33:36.515286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.468 [2024-11-26 19:33:36.530261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.468 [2024-11-26 19:33:36.530280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.468 [2024-11-26 19:33:36.545634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.468 [2024-11-26 19:33:36.545652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.468 [2024-11-26 19:33:36.560005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.468 [2024-11-26 19:33:36.560023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.468 [2024-11-26 19:33:36.575014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.468 [2024-11-26 19:33:36.575032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.727 [2024-11-26 19:33:36.590154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.727 [2024-11-26 19:33:36.590173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.727 [2024-11-26 19:33:36.605787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.727 [2024-11-26 19:33:36.605810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.727 16433.00 IOPS, 128.38 MiB/s [2024-11-26T18:33:36.841Z] [2024-11-26 19:33:36.619324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.727 [2024-11-26 19:33:36.619343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.727 [2024-11-26 19:33:36.634404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.727 [2024-11-26 19:33:36.634424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.727 [2024-11-26 19:33:36.649142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.727 [2024-11-26 19:33:36.649161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.727 [2024-11-26 19:33:36.660602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.727 [2024-11-26 19:33:36.660621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.727 [2024-11-26 19:33:36.675497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.727 [2024-11-26 19:33:36.675514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.727 [2024-11-26 
19:33:36.690480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.727 [2024-11-26 19:33:36.690499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.727 [2024-11-26 19:33:36.706147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.727 [2024-11-26 19:33:36.706166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.728 [2024-11-26 19:33:36.721757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.728 [2024-11-26 19:33:36.721776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.728 [2024-11-26 19:33:36.735493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.728 [2024-11-26 19:33:36.735512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.728 [2024-11-26 19:33:36.750597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.728 [2024-11-26 19:33:36.750616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.728 [2024-11-26 19:33:36.765369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.728 [2024-11-26 19:33:36.765388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.728 [2024-11-26 19:33:36.779343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.728 [2024-11-26 19:33:36.779361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.728 [2024-11-26 19:33:36.794622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.728 [2024-11-26 19:33:36.794641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.728 [2024-11-26 19:33:36.808944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.728 [2024-11-26 19:33:36.808963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.728 [2024-11-26 19:33:36.823286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.728 [2024-11-26 19:33:36.823307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.728 [2024-11-26 19:33:36.837834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.728 [2024-11-26 19:33:36.837854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.986 [2024-11-26 19:33:36.850285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.986 [2024-11-26 19:33:36.850303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.986 [2024-11-26 19:33:36.863378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.986 [2024-11-26 19:33:36.863396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.987 [2024-11-26 19:33:36.879094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.987 [2024-11-26 19:33:36.879115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.987 [2024-11-26 19:33:36.893654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.987 [2024-11-26 19:33:36.893681] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.987 [2024-11-26 19:33:36.907705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.987 [2024-11-26 19:33:36.907723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.987 [2024-11-26 19:33:36.922587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.987 [2024-11-26 19:33:36.922608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.987 [2024-11-26 19:33:36.937976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.987 [2024-11-26 19:33:36.937996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.987 [2024-11-26 19:33:36.951426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.987 [2024-11-26 19:33:36.951446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.987 [2024-11-26 19:33:36.967473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.987 [2024-11-26 19:33:36.967494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.987 [2024-11-26 19:33:36.983285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.987 [2024-11-26 19:33:36.983305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.987 [2024-11-26 19:33:36.999544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.987 [2024-11-26 19:33:36.999564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.987 [2024-11-26 19:33:37.014480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.987 [2024-11-26 19:33:37.014500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.987 [2024-11-26 19:33:37.029181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.987 [2024-11-26 19:33:37.029201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.987 [2024-11-26 19:33:37.043290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.987 [2024-11-26 19:33:37.043310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.987 [2024-11-26 19:33:37.057778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.987 [2024-11-26 19:33:37.057797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.987 [2024-11-26 19:33:37.070364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.987 [2024-11-26 19:33:37.070383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.987 [2024-11-26 19:33:37.083534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.987 [2024-11-26 19:33:37.083553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.987 [2024-11-26 19:33:37.098657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.987 [2024-11-26 19:33:37.098682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.246 [2024-11-26 19:33:37.108938] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.246 [2024-11-26 19:33:37.108956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.246 [2024-11-26 19:33:37.122995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.246 [2024-11-26 19:33:37.123014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.246 [2024-11-26 19:33:37.133209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.246 [2024-11-26 19:33:37.133228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.246 [2024-11-26 19:33:37.147901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.246 [2024-11-26 19:33:37.147925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.246 [2024-11-26 19:33:37.162540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.246 [2024-11-26 19:33:37.162560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.246 [2024-11-26 19:33:37.177043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.246 [2024-11-26 19:33:37.177064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.246 [2024-11-26 19:33:37.190816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.246 [2024-11-26 19:33:37.190837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.246 [2024-11-26 19:33:37.207859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.246 [2024-11-26 19:33:37.207881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.246 [2024-11-26 19:33:37.222493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.246 [2024-11-26 19:33:37.222514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.246 [2024-11-26 19:33:37.239397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.246 [2024-11-26 19:33:37.239417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.246 [2024-11-26 19:33:37.255427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.246 [2024-11-26 19:33:37.255446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.246 [2024-11-26 19:33:37.270362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.246 [2024-11-26 19:33:37.270381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.246 [2024-11-26 19:33:37.284974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.246 [2024-11-26 19:33:37.284993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.246 [2024-11-26 19:33:37.298693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.246 [2024-11-26 19:33:37.298712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.246 [2024-11-26 19:33:37.309969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.246 [2024-11-26 19:33:37.309987] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.246 [2024-11-26 19:33:37.323243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.247 [2024-11-26 19:33:37.323262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.247 [2024-11-26 19:33:37.338030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.247 [2024-11-26 19:33:37.338048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.247 [2024-11-26 19:33:37.353383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.247 [2024-11-26 19:33:37.353403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.506 [2024-11-26 19:33:37.367548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.506 [2024-11-26 19:33:37.367567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.506 [2024-11-26 19:33:37.382037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.506 [2024-11-26 19:33:37.382056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.506 [2024-11-26 19:33:37.393487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.506 [2024-11-26 19:33:37.393506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.506 [2024-11-26 19:33:37.407839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.506 [2024-11-26 19:33:37.407858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.506 [2024-11-26 19:33:37.422779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.506 [2024-11-26 19:33:37.422801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.506 [2024-11-26 19:33:37.437475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.506 [2024-11-26 19:33:37.437494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.506 [2024-11-26 19:33:37.448862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.506 [2024-11-26 19:33:37.448881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.506 [2024-11-26 19:33:37.464142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.506 [2024-11-26 19:33:37.464164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.506 [2024-11-26 19:33:37.479179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.506 [2024-11-26 19:33:37.479201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.506 [2024-11-26 19:33:37.496734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.506 [2024-11-26 19:33:37.496757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.506 [2024-11-26 19:33:37.509016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.506 [2024-11-26 19:33:37.509037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.506 [2024-11-26 19:33:37.523445] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.506 [2024-11-26 19:33:37.523464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.506 [2024-11-26 19:33:37.538361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.506 [2024-11-26 19:33:37.538379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.506 [2024-11-26 19:33:37.553131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.506 [2024-11-26 19:33:37.553150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.506 [2024-11-26 19:33:37.564516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.506 [2024-11-26 19:33:37.564534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.506 [2024-11-26 19:33:37.579341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.506 [2024-11-26 19:33:37.579360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.506 [2024-11-26 19:33:37.594238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.506 [2024-11-26 19:33:37.594256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.506 [2024-11-26 19:33:37.609514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.506 [2024-11-26 19:33:37.609532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.765 16417.67 IOPS, 128.26 MiB/s [2024-11-26T18:33:37.879Z] [2024-11-26 19:33:37.623791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.765 [2024-11-26 19:33:37.623809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.765 [2024-11-26 19:33:37.638643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.765 [2024-11-26 19:33:37.638660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.765 [2024-11-26 19:33:37.653366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.765 [2024-11-26 19:33:37.653385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.765 [2024-11-26 19:33:37.664681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.765 [2024-11-26 19:33:37.664699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.765 [2024-11-26 19:33:37.679415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.765 [2024-11-26 19:33:37.679433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.765 [2024-11-26 19:33:37.694637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.765 [2024-11-26 19:33:37.694657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.765 [2024-11-26 19:33:37.709262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.765 [2024-11-26 19:33:37.709280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.765 [2024-11-26 19:33:37.720711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:32:14.765 [2024-11-26 19:33:37.720731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.765 [2024-11-26 19:33:37.735664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.765 [2024-11-26 19:33:37.735693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.765 [2024-11-26 19:33:37.751653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.765 [2024-11-26 19:33:37.751681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.765 [2024-11-26 19:33:37.764809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.765 [2024-11-26 19:33:37.764830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.765 [2024-11-26 19:33:37.780023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.765 [2024-11-26 19:33:37.780046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.766 [2024-11-26 19:33:37.795541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.766 [2024-11-26 19:33:37.795561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.766 [2024-11-26 19:33:37.810811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.766 [2024-11-26 19:33:37.810830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.766 [2024-11-26 19:33:37.821487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.766 [2024-11-26 19:33:37.821505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.766 [2024-11-26 19:33:37.835903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.766 [2024-11-26 19:33:37.835921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.766 [2024-11-26 19:33:37.850804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.766 [2024-11-26 19:33:37.850822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.766 [2024-11-26 19:33:37.865419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.766 [2024-11-26 19:33:37.865438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.024 [2024-11-26 19:33:37.879804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.024 [2024-11-26 19:33:37.879823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.025 [2024-11-26 19:33:37.894605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.025 [2024-11-26 19:33:37.894623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.025 [2024-11-26 19:33:37.909894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.025 [2024-11-26 19:33:37.909913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.025 [2024-11-26 19:33:37.923336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.025 [2024-11-26 19:33:37.923355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.025 [2024-11-26 19:33:37.937881] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.025 [2024-11-26 19:33:37.937900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.025 [2024-11-26 19:33:37.950527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.025 [2024-11-26 19:33:37.950545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.025 [2024-11-26 19:33:37.965270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.025 [2024-11-26 19:33:37.965289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.025 [2024-11-26 19:33:37.978796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.025 [2024-11-26 19:33:37.978813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.025 [2024-11-26 19:33:37.990206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.025 [2024-11-26 19:33:37.990226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.025 [2024-11-26 19:33:38.003037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.025 [2024-11-26 19:33:38.003056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.025 [2024-11-26 19:33:38.019186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.025 [2024-11-26 19:33:38.019206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.025 [2024-11-26 19:33:38.035232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.025 [2024-11-26 19:33:38.035251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.025 [2024-11-26 19:33:38.050134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.025 [2024-11-26 19:33:38.050152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.025 [2024-11-26 19:33:38.062399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.025 [2024-11-26 19:33:38.062417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.025 [2024-11-26 19:33:38.075690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.025 [2024-11-26 19:33:38.075708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.025 [2024-11-26 19:33:38.090679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.025 [2024-11-26 19:33:38.090698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.025 [2024-11-26 19:33:38.105284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.025 [2024-11-26 19:33:38.105304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.025 [2024-11-26 19:33:38.118901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.025 [2024-11-26 19:33:38.118919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.025 [2024-11-26 19:33:38.134011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.025 [2024-11-26 19:33:38.134030] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.284 [2024-11-26 19:33:38.149306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.284 [2024-11-26 19:33:38.149325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.284 [2024-11-26 19:33:38.163639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.284 [2024-11-26 19:33:38.163657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.284 [2024-11-26 19:33:38.178729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.284 [2024-11-26 19:33:38.178748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.284 [2024-11-26 19:33:38.193268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.284 [2024-11-26 19:33:38.193286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.284 [2024-11-26 19:33:38.207990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.284 [2024-11-26 19:33:38.208009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.284 [2024-11-26 19:33:38.222630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.284 [2024-11-26 19:33:38.222653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.284 [2024-11-26 19:33:38.237817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.284 [2024-11-26 19:33:38.237836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.284 [2024-11-26 19:33:38.248885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.284 [2024-11-26 19:33:38.248904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.284 [2024-11-26 19:33:38.263788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.284 [2024-11-26 19:33:38.263805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.284 [2024-11-26 19:33:38.278608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.284 [2024-11-26 19:33:38.278626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.284 [2024-11-26 19:33:38.293562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.284 [2024-11-26 19:33:38.293582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.284 [2024-11-26 19:33:38.306924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.284 [2024-11-26 19:33:38.306944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.284 [2024-11-26 19:33:38.323847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.284 [2024-11-26 19:33:38.323869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.284 [2024-11-26 19:33:38.338786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.284 [2024-11-26 19:33:38.338807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.284 [2024-11-26 19:33:38.355437] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.284 [2024-11-26 19:33:38.355458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.284 [2024-11-26 19:33:38.370478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.284 [2024-11-26 19:33:38.370499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.284 [2024-11-26 19:33:38.385351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.284 [2024-11-26 19:33:38.385370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.543 [2024-11-26 19:33:38.399628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.543 [2024-11-26 19:33:38.399649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.543 [2024-11-26 19:33:38.414586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.543 [2024-11-26 19:33:38.414605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.543 [2024-11-26 19:33:38.429338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.543 [2024-11-26 19:33:38.429358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.543 [2024-11-26 19:33:38.443600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.543 [2024-11-26 19:33:38.443620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.543 [2024-11-26 19:33:38.458477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.543 [2024-11-26 19:33:38.458495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.543 [2024-11-26 19:33:38.473505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.543 [2024-11-26 19:33:38.473524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.543 [2024-11-26 19:33:38.487361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.543 [2024-11-26 19:33:38.487380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.543 [2024-11-26 19:33:38.502053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.543 [2024-11-26 19:33:38.502077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.543 [2024-11-26 19:33:38.513682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.543 [2024-11-26 19:33:38.513703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.543 [2024-11-26 19:33:38.528181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.543 [2024-11-26 19:33:38.528202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.543 [2024-11-26 19:33:38.542626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.543 [2024-11-26 19:33:38.542647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.543 [2024-11-26 19:33:38.558346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.543 [2024-11-26 19:33:38.558367] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.543 [2024-11-26 19:33:38.574513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.543 [2024-11-26 19:33:38.574534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.543 [2024-11-26 19:33:38.591758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.543 [2024-11-26 19:33:38.591780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.543 [2024-11-26 19:33:38.607191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.543 [2024-11-26 19:33:38.607211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.543 16368.50 IOPS, 127.88 MiB/s [2024-11-26T18:33:38.657Z] [2024-11-26 19:33:38.622638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.543 [2024-11-26 19:33:38.622657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.543 [2024-11-26 19:33:38.637695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.543 [2024-11-26 19:33:38.637715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.543 [2024-11-26 19:33:38.649145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.543 [2024-11-26 19:33:38.649164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.802 [2024-11-26 19:33:38.663188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.802 [2024-11-26 19:33:38.663207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.802 [2024-11-26 19:33:38.678159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.802 [2024-11-26 19:33:38.678178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.802 [2024-11-26 19:33:38.693157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.802 [2024-11-26 19:33:38.693177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.802 [2024-11-26 19:33:38.707293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.802 [2024-11-26 19:33:38.707312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.802 [2024-11-26 19:33:38.721821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.802 [2024-11-26 19:33:38.721841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.803 [2024-11-26 19:33:38.734450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.803 [2024-11-26 19:33:38.734468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.803 [2024-11-26 19:33:38.750252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.803 [2024-11-26 19:33:38.750271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.803 [2024-11-26 19:33:38.765226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.803 [2024-11-26 19:33:38.765249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.803 [2024-11-26 
19:33:38.779089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.803 [2024-11-26 19:33:38.779111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.803 [2024-11-26 19:33:38.794099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.803 [2024-11-26 19:33:38.794117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.803 [2024-11-26 19:33:38.806691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.803 [2024-11-26 19:33:38.806710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.803 [2024-11-26 19:33:38.822684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.803 [2024-11-26 19:33:38.822704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.803 [2024-11-26 19:33:38.839301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.803 [2024-11-26 19:33:38.839321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.803 [2024-11-26 19:33:38.855412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.803 [2024-11-26 19:33:38.855434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.803 [2024-11-26 19:33:38.872061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.803 [2024-11-26 19:33:38.872083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.803 [2024-11-26 19:33:38.887268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.803 [2024-11-26 19:33:38.887287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.803 [2024-11-26 19:33:38.902348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.803 [2024-11-26 19:33:38.902366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.062 [2024-11-26 19:33:38.917849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.062 [2024-11-26 19:33:38.917868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.062 [2024-11-26 19:33:38.928593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.062 [2024-11-26 19:33:38.928612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.062 [2024-11-26 19:33:38.943319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.062 [2024-11-26 19:33:38.943338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.062 [2024-11-26 19:33:38.958212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.062 [2024-11-26 19:33:38.958231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.062 [2024-11-26 19:33:38.973626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.062 [2024-11-26 19:33:38.973646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.062 [2024-11-26 19:33:38.986207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.062 [2024-11-26 19:33:38.986226] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.062 [2024-11-26 19:33:39.001400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.062 [2024-11-26 19:33:39.001420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.062 [2024-11-26 19:33:39.014282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.062 [2024-11-26 19:33:39.014301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.062 [2024-11-26 19:33:39.026854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.062 [2024-11-26 19:33:39.026874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.062 [2024-11-26 19:33:39.037925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.062 [2024-11-26 19:33:39.037943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.062 [2024-11-26 19:33:39.051613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.062 [2024-11-26 19:33:39.051633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.062 [2024-11-26 19:33:39.067236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.062 [2024-11-26 19:33:39.067256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.062 [2024-11-26 19:33:39.083894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.062 [2024-11-26 19:33:39.083916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.063 [2024-11-26 19:33:39.098984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.063 [2024-11-26 19:33:39.099007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.063 [2024-11-26 19:33:39.115929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.063 [2024-11-26 19:33:39.115949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.063 [2024-11-26 19:33:39.131724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.063 [2024-11-26 19:33:39.131743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.063 [2024-11-26 19:33:39.146462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.063 [2024-11-26 19:33:39.146481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.063 [2024-11-26 19:33:39.161340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.063 [2024-11-26 19:33:39.161358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.321 [2024-11-26 19:33:39.176069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.321 [2024-11-26 19:33:39.176089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.321 [2024-11-26 19:33:39.191408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.321 [2024-11-26 19:33:39.191428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.321 [2024-11-26 19:33:39.206234] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.321 [2024-11-26 19:33:39.206255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.321 [2024-11-26 19:33:39.222149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.321 [2024-11-26 19:33:39.222168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.322 [2024-11-26 19:33:39.237608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.322 [2024-11-26 19:33:39.237628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.322 [2024-11-26 19:33:39.250190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.322 [2024-11-26 19:33:39.250208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.322 [2024-11-26 19:33:39.263613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.322 [2024-11-26 19:33:39.263633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.322 [2024-11-26 19:33:39.278755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.322 [2024-11-26 19:33:39.278774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.322 [2024-11-26 19:33:39.293459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.322 [2024-11-26 19:33:39.293478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.322 [2024-11-26 19:33:39.304808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.322 [2024-11-26 19:33:39.304826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.322 [2024-11-26 19:33:39.319237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.322 [2024-11-26 19:33:39.319256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.322 [2024-11-26 19:33:39.334851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.322 [2024-11-26 19:33:39.334870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.322 [2024-11-26 19:33:39.350747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.322 [2024-11-26 19:33:39.350766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.322 [2024-11-26 19:33:39.365663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.322 [2024-11-26 19:33:39.365689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.322 [2024-11-26 19:33:39.379896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.322 [2024-11-26 19:33:39.379916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.322 [2024-11-26 19:33:39.394772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.322 [2024-11-26 19:33:39.394792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.322 [2024-11-26 19:33:39.409517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.322 [2024-11-26 19:33:39.409538] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.322 [2024-11-26 19:33:39.423616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.322 [2024-11-26 19:33:39.423636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.581 [2024-11-26 19:33:39.439608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.581 [2024-11-26 19:33:39.439627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.581 [2024-11-26 19:33:39.454534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.581 [2024-11-26 19:33:39.454553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.581 [2024-11-26 19:33:39.469691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.581 [2024-11-26 19:33:39.469710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.581 [2024-11-26 19:33:39.481000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.581 [2024-11-26 19:33:39.481019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.581 [2024-11-26 19:33:39.495365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.581 [2024-11-26 19:33:39.495384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.581 [2024-11-26 19:33:39.510024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.581 [2024-11-26 19:33:39.510043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.581 [2024-11-26 19:33:39.522138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.581 [2024-11-26 19:33:39.522156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.581 [2024-11-26 19:33:39.535465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.581 [2024-11-26 19:33:39.535485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.581 [2024-11-26 19:33:39.550730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.581 [2024-11-26 19:33:39.550749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.581 [2024-11-26 19:33:39.566275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.581 [2024-11-26 19:33:39.566294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.581 [2024-11-26 19:33:39.581651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.581 [2024-11-26 19:33:39.581675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.581 [2024-11-26 19:33:39.594996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.581 [2024-11-26 19:33:39.595016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.581 [2024-11-26 19:33:39.610784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.581 [2024-11-26 19:33:39.610802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.581 16349.00 IOPS, 127.73 MiB/s [2024-11-26T18:33:39.695Z] [2024-11-26 
19:33:39.625455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.581 [2024-11-26 19:33:39.625473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.581 00:32:16.581 Latency(us) 00:32:16.581 [2024-11-26T18:33:39.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:16.581 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:32:16.581 Nvme1n1 : 5.01 16350.90 127.74 0.00 0.00 7820.74 2231.34 14667.58 00:32:16.581 [2024-11-26T18:33:39.695Z] =================================================================================================================== 00:32:16.581 [2024-11-26T18:33:39.695Z] Total : 16350.90 127.74 0.00 0.00 7820.74 2231.34 14667.58 00:32:16.581 [2024-11-26 19:33:39.633600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.581 [2024-11-26 19:33:39.633628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.581 [2024-11-26 19:33:39.645599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.581 [2024-11-26 19:33:39.645615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.581 [2024-11-26 19:33:39.657608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.581 [2024-11-26 19:33:39.657625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.581 [2024-11-26 19:33:39.669602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.581 [2024-11-26 19:33:39.669620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.581 [2024-11-26 19:33:39.681600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.581 [2024-11-26 19:33:39.681614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.845 [2024-11-26 19:33:39.693607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.845 [2024-11-26 19:33:39.693624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.845 [2024-11-26 19:33:39.705597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.845 [2024-11-26 19:33:39.705610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.845 [2024-11-26 19:33:39.717596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.845 [2024-11-26 19:33:39.717610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.845 [2024-11-26 19:33:39.729594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.845 [2024-11-26 19:33:39.729610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.845 [2024-11-26 19:33:39.741593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.845 [2024-11-26 19:33:39.741606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.845 [2024-11-26 19:33:39.753594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.845 [2024-11-26 19:33:39.753604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.845 [2024-11-26 19:33:39.765594] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.845 [2024-11-26 19:33:39.765606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.845 [2024-11-26 19:33:39.777593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.845 [2024-11-26 19:33:39.777604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.845 [2024-11-26 19:33:39.789609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.845 [2024-11-26 19:33:39.789630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3969724) - No such process 00:32:16.845 19:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3969724 00:32:16.845 19:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:16.845 19:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.845 19:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:16.845 19:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.845 19:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:16.845 19:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.845 19:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:16.845 delay0 00:32:16.845 19:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.845 19:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:32:16.845 19:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.845 19:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:16.845 19:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.846 19:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:32:16.846 [2024-11-26 19:33:39.944556] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:24.969 Initializing NVMe Controllers 00:32:24.969 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:24.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:24.969 Initialization complete. Launching workers. 
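(Note: the zcopy abort pass traced above reduces to a short sequence that could, in principle, be replayed by hand against an already-running target. This is only a sketch of what the rpc_cmd/abort invocations in the trace amount to; it assumes rpc_cmd forwards its arguments to scripts/rpc.py, that a base bdev named malloc0 already exists, and that the paths and 10.0.0.2 address are the ones used on this CI host.)

    # attach a slow (delay) bdev as NSID 1 on the existing subsystem
    # (assumes a base bdev named malloc0 was created earlier in the test)
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # drive 50/50 random I/O for 5 s at queue depth 64 and abort outstanding commands,
    # exactly as the traced example invocation does
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'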
00:32:24.969 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 233, failed: 27787 00:32:24.969 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 27898, failed to submit 122 00:32:24.969 success 27816, unsuccessful 82, failed 0 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:24.969 rmmod nvme_tcp 00:32:24.969 rmmod nvme_fabrics 00:32:24.969 rmmod nvme_keyring 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3966319 ']' 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3966319 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3966319 ']' 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3966319 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3966319 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3966319' 00:32:24.969 killing process with pid 3966319 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3966319 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3966319 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:24.969 19:33:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:24.969 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.348 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:26.348 00:32:26.348 real 0m32.258s 00:32:26.348 user 0m41.466s 00:32:26.348 sys 0m13.227s 00:32:26.348 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:26.348 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:26.348 ************************************ 00:32:26.348 END TEST nvmf_zcopy 00:32:26.348 ************************************ 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:26.607 ************************************ 00:32:26.607 START TEST nvmf_nmic 00:32:26.607 ************************************ 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:26.607 * Looking for test storage... 
00:32:26.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:26.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.607 --rc genhtml_branch_coverage=1 00:32:26.607 --rc genhtml_function_coverage=1 00:32:26.607 --rc genhtml_legend=1 00:32:26.607 --rc geninfo_all_blocks=1 00:32:26.607 --rc geninfo_unexecuted_blocks=1 00:32:26.607 00:32:26.607 ' 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:26.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.607 --rc genhtml_branch_coverage=1 00:32:26.607 --rc genhtml_function_coverage=1 00:32:26.607 --rc genhtml_legend=1 00:32:26.607 --rc geninfo_all_blocks=1 00:32:26.607 --rc geninfo_unexecuted_blocks=1 00:32:26.607 00:32:26.607 ' 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:26.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.607 --rc genhtml_branch_coverage=1 00:32:26.607 --rc genhtml_function_coverage=1 00:32:26.607 --rc genhtml_legend=1 00:32:26.607 --rc geninfo_all_blocks=1 00:32:26.607 --rc geninfo_unexecuted_blocks=1 00:32:26.607 00:32:26.607 ' 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:26.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.607 --rc genhtml_branch_coverage=1 00:32:26.607 --rc genhtml_function_coverage=1 00:32:26.607 --rc genhtml_legend=1 00:32:26.607 --rc geninfo_all_blocks=1 00:32:26.607 --rc geninfo_unexecuted_blocks=1 00:32:26.607 00:32:26.607 ' 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.607 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:26.608 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.866 19:33:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:32:26.866 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:32.141 19:33:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:32.141 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:32.142 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:32.142 19:33:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:32.142 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:32.142 Found net devices under 0000:86:00.0: cvl_0_0 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.142 
19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:32.142 Found net devices under 0000:86:00.1: cvl_0_1 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:32.142 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
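(Note: the namespace plumbing traced above, and continued just below, gives the target and the initiator their own IP endpoints on the two ports of the detected E810 NIC. A condensed sketch of what common.sh is doing here, using the interface names found on this particular host, looks roughly like the following; other hosts will detect different net devices.)

    # target side lives in its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic through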
00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:32.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:32.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:32:32.400 00:32:32.400 --- 10.0.0.2 ping statistics --- 00:32:32.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.400 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:32.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:32.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:32:32.400 00:32:32.400 --- 10.0.0.1 ping statistics --- 00:32:32.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.400 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.400 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.658 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3979938 00:32:32.658 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:32.658 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3979938 00:32:32.658 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3979938 ']' 00:32:32.658 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.658 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:32.659 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:32.659 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:32.659 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.659 [2024-11-26 19:33:55.560135] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:32.659 [2024-11-26 19:33:55.561062] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:32:32.659 [2024-11-26 19:33:55.561095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:32.659 [2024-11-26 19:33:55.643324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:32.659 [2024-11-26 19:33:55.689383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:32.659 [2024-11-26 19:33:55.689418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:32.659 [2024-11-26 19:33:55.689430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:32.659 [2024-11-26 19:33:55.689439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:32.659 [2024-11-26 19:33:55.689446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:32.659 [2024-11-26 19:33:55.691010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.659 [2024-11-26 19:33:55.691121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:32.659 [2024-11-26 19:33:55.691249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:32.659 [2024-11-26 19:33:55.691250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.659 [2024-11-26 19:33:55.760550] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:32.659 [2024-11-26 19:33:55.761644] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:32.659 [2024-11-26 19:33:55.761951] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
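(Note: the interrupt-mode notices around here come from the nvmf_tgt instance started a few lines up. Stripped of the test wrappers, that startup amounts to roughly the sketch below; the polling loop is only a simplification of the harness's waitforlisten helper, and the RPC socket path is the default /var/tmp/spdk.sock assumed by the trace.)

    # start the target inside the target namespace: 4 cores (-m 0xF), tracepoint
    # group mask 0xFFFF, interrupt mode, shared-memory id 0
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # wait until the JSON-RPC socket answers before issuing configuration RPCs
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    # create the TCP transport with zero-copy enabled, as the nmic test does next
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192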
00:32:32.659 [2024-11-26 19:33:55.762512] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:32.659 [2024-11-26 19:33:55.762558] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.917 [2024-11-26 19:33:55.828091] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.917 Malloc0 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.917 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.917 [2024-11-26 19:33:55.908232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:32:32.918 test case1: single bdev can't be used in multiple subsystems 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.918 [2024-11-26 19:33:55.939721] bdev.c:8467:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:32:32.918 [2024-11-26 19:33:55.939744] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:32:32.918 [2024-11-26 19:33:55.939755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:32.918 request: 00:32:32.918 { 00:32:32.918 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:32:32.918 "namespace": { 00:32:32.918 "bdev_name": "Malloc0", 00:32:32.918 "no_auto_visible": false 00:32:32.918 }, 00:32:32.918 "method": "nvmf_subsystem_add_ns", 00:32:32.918 "req_id": 1 00:32:32.918 } 00:32:32.918 Got JSON-RPC error response 00:32:32.918 response: 00:32:32.918 { 00:32:32.918 "code": -32602, 00:32:32.918 "message": "Invalid parameters" 00:32:32.918 } 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:32:32.918 19:33:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:32:32.918 Adding namespace failed - expected result. 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:32:32.918 test case2: host connect to nvmf target in multiple paths 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.918 [2024-11-26 19:33:55.951844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.918 19:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:33.176 19:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:32:33.433 19:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:32:33.433 19:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:32:33.433 19:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:33.433 19:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:33.434 19:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:32:35.333 19:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:35.333 19:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:35.333 19:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:35.333 19:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:35.333 19:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:35.333 19:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:32:35.333 19:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:35.333 [global] 00:32:35.333 thread=1 00:32:35.333 invalidate=1 
00:32:35.333 rw=write 00:32:35.333 time_based=1 00:32:35.333 runtime=1 00:32:35.333 ioengine=libaio 00:32:35.333 direct=1 00:32:35.333 bs=4096 00:32:35.333 iodepth=1 00:32:35.333 norandommap=0 00:32:35.333 numjobs=1 00:32:35.333 00:32:35.333 verify_dump=1 00:32:35.333 verify_backlog=512 00:32:35.333 verify_state_save=0 00:32:35.333 do_verify=1 00:32:35.333 verify=crc32c-intel 00:32:35.333 [job0] 00:32:35.333 filename=/dev/nvme0n1 00:32:35.333 Could not set queue depth (nvme0n1) 00:32:35.591 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:35.591 fio-3.35 00:32:35.591 Starting 1 thread 00:32:36.964 00:32:36.964 job0: (groupid=0, jobs=1): err= 0: pid=3981063: Tue Nov 26 19:33:59 2024 00:32:36.964 read: IOPS=111, BW=444KiB/s (455kB/s)(448KiB/1008msec) 00:32:36.964 slat (nsec): min=3623, max=25326, avg=8293.43, stdev=7122.10 00:32:36.964 clat (usec): min=183, max=41957, avg=8265.54, stdev=16283.10 00:32:36.964 lat (usec): min=187, max=41981, avg=8273.83, stdev=16290.02 00:32:36.964 clat percentiles (usec): 00:32:36.964 | 1.00th=[ 200], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 247], 00:32:36.964 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 251], 60.00th=[ 253], 00:32:36.964 | 70.00th=[ 258], 80.00th=[ 420], 90.00th=[41157], 95.00th=[41157], 00:32:36.964 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:36.964 | 99.99th=[42206] 00:32:36.964 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:32:36.964 slat (nsec): min=10139, max=41971, avg=11319.89, stdev=2124.89 00:32:36.964 clat (usec): min=130, max=1436, avg=143.70, stdev=58.58 00:32:36.964 lat (usec): min=141, max=1447, avg=155.02, stdev=58.78 00:32:36.964 clat percentiles (usec): 00:32:36.964 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 135], 20.00th=[ 137], 00:32:36.964 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 139], 60.00th=[ 141], 00:32:36.964 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 149], 95.00th=[ 153], 00:32:36.964 | 99.00th=[ 172], 99.50th=[ 289], 99.90th=[ 1434], 99.95th=[ 1434], 00:32:36.964 | 99.99th=[ 1434] 00:32:36.964 bw ( KiB/s): min= 4087, max= 4087, per=100.00%, avg=4087.00, stdev= 0.00, samples=1 00:32:36.964 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:32:36.964 lat (usec) : 250=87.98%, 500=8.33% 00:32:36.964 lat (msec) : 2=0.16%, 50=3.53% 00:32:36.964 cpu : usr=0.50%, sys=0.79%, ctx=624, majf=0, minf=1 00:32:36.964 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:36.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.964 issued rwts: total=112,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.964 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:36.964 00:32:36.964 Run status group 0 (all jobs): 00:32:36.964 READ: bw=444KiB/s (455kB/s), 444KiB/s-444KiB/s (455kB/s-455kB/s), io=448KiB (459kB), run=1008-1008msec 00:32:36.964 WRITE: bw=2032KiB/s (2081kB/s), 2032KiB/s-2032KiB/s (2081kB/s-2081kB/s), io=2048KiB (2097kB), run=1008-1008msec 00:32:36.964 00:32:36.964 Disk stats (read/write): 00:32:36.964 nvme0n1: ios=159/512, merge=0/0, ticks=820/68, in_queue=888, util=91.68% 00:32:36.964 19:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:36.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:32:36.964 19:34:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:36.964 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:32:36.964 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:36.964 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:36.964 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:36.964 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:36.964 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:32:36.964 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:36.964 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:32:36.964 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:36.964 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:36.964 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:36.964 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:36.964 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:36.964 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:36.964 rmmod nvme_tcp 00:32:37.223 rmmod nvme_fabrics 00:32:37.223 rmmod nvme_keyring 00:32:37.223 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:37.223 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:37.223 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:37.223 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3979938 ']' 00:32:37.223 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3979938 00:32:37.223 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3979938 ']' 00:32:37.223 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3979938 00:32:37.223 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:32:37.223 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:37.223 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3979938 00:32:37.223 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:37.223 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:37.223 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3979938' 00:32:37.223 killing process with pid 3979938 00:32:37.223 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3979938 00:32:37.223 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3979938 00:32:37.482 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:37.482 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:37.482 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:37.482 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:37.482 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:32:37.482 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:37.482 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:32:37.482 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:37.482 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:37.482 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.482 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:37.482 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.387 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:39.387 00:32:39.387 real 0m12.934s 00:32:39.387 user 0m23.526s 00:32:39.387 sys 0m5.991s 00:32:39.387 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:39.387 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.387 ************************************ 00:32:39.387 END TEST nvmf_nmic 00:32:39.387 ************************************ 00:32:39.387 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:39.387 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:39.388 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:39.388 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:39.648 ************************************ 00:32:39.648 START TEST nvmf_fio_target 00:32:39.648 ************************************ 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:39.648 * Looking for test storage... 
00:32:39.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:39.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.648 --rc genhtml_branch_coverage=1 00:32:39.648 --rc genhtml_function_coverage=1 00:32:39.648 --rc genhtml_legend=1 00:32:39.648 --rc geninfo_all_blocks=1 00:32:39.648 --rc geninfo_unexecuted_blocks=1 00:32:39.648 00:32:39.648 ' 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:39.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.648 --rc genhtml_branch_coverage=1 00:32:39.648 --rc genhtml_function_coverage=1 00:32:39.648 --rc genhtml_legend=1 00:32:39.648 --rc geninfo_all_blocks=1 00:32:39.648 --rc geninfo_unexecuted_blocks=1 00:32:39.648 00:32:39.648 ' 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:39.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.648 --rc genhtml_branch_coverage=1 00:32:39.648 --rc genhtml_function_coverage=1 00:32:39.648 --rc genhtml_legend=1 00:32:39.648 --rc geninfo_all_blocks=1 00:32:39.648 --rc geninfo_unexecuted_blocks=1 00:32:39.648 00:32:39.648 ' 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:39.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.648 --rc genhtml_branch_coverage=1 00:32:39.648 --rc genhtml_function_coverage=1 00:32:39.648 --rc genhtml_legend=1 00:32:39.648 --rc geninfo_all_blocks=1 00:32:39.648 --rc geninfo_unexecuted_blocks=1 00:32:39.648 
00:32:39.648 ' 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.648 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:39.649 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:46.217 19:34:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.217 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:46.218 19:34:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:46.218 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:46.218 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:46.218 Found net 
devices under 0000:86:00.0: cvl_0_0 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:46.218 Found net devices under 0000:86:00.1: cvl_0_1 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:46.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:46.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:32:46.218 00:32:46.218 --- 10.0.0.2 ping statistics --- 00:32:46.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.218 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:46.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:32:46.218 00:32:46.218 --- 10.0.0.1 ping statistics --- 00:32:46.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.218 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3986275 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3986275 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3986275 ']' 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.218 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:46.219 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
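The nvmftestinit trace above (the ip netns / ip addr / iptables / ping sequence) builds the usual phy loopback topology before the target is launched inside the namespace: the first port (cvl_0_0) moves into cvl_0_0_ns_spdk and is addressed as 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as 10.0.0.1, and a firewall exception is opened for the NVMe/TCP port. Condensed from the trace (interface names and addresses are specific to this run), the steps are roughly:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side interface lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # matches the ipts helper in the trace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # sanity check, as above
# The target is then started inside the namespace, as traced:
# ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF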
00:32:46.219 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:46.219 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:46.219 [2024-11-26 19:34:08.517805] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:46.219 [2024-11-26 19:34:08.518731] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:32:46.219 [2024-11-26 19:34:08.518764] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:46.219 [2024-11-26 19:34:08.601281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:46.219 [2024-11-26 19:34:08.644116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:46.219 [2024-11-26 19:34:08.644153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:46.219 [2024-11-26 19:34:08.644162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:46.219 [2024-11-26 19:34:08.644168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:46.219 [2024-11-26 19:34:08.644174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:46.219 [2024-11-26 19:34:08.649688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.219 [2024-11-26 19:34:08.649729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:46.219 [2024-11-26 19:34:08.649835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.219 [2024-11-26 19:34:08.649836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:46.219 [2024-11-26 19:34:08.718916] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:46.219 [2024-11-26 19:34:08.719796] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:46.219 [2024-11-26 19:34:08.720120] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:46.219 [2024-11-26 19:34:08.720792] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:46.219 [2024-11-26 19:34:08.720832] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
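With the target up in interrupt mode (four reactors, all poll groups in intr mode), fio.sh provisions the bdevs and the subsystem that the four fio jobs below exercise. The rpc.py calls traced after this point boil down to roughly the following sketch (Malloc0..Malloc6 are the auto-assigned bdev names from this run, and the exact ordering is condensed):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 7); do $RPC bdev_malloc_create 64 512; done              # yields Malloc0 .. Malloc6
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'          # striped RAID0 over two malloc bdevs
$RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"       # four namespaces -> nvme0n1..nvme0n4 on the host
done
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The host side then issues a single nvme connect with the hostnqn/hostid generated for this run and waits until lsblk reports four devices with serial SPDKISFASTANDAWESOME before the fio-wrapper run starts.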
00:32:46.219 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:46.219 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:32:46.219 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:46.219 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:46.219 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:46.219 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:46.219 19:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:46.219 [2024-11-26 19:34:08.970485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:46.219 19:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:46.219 19:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:46.219 19:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:46.477 19:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:46.478 19:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:46.736 19:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:46.736 19:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:46.995 19:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:46.995 19:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:46.995 19:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:47.253 19:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:47.253 19:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:47.537 19:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:47.537 19:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:47.795 19:34:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:32:47.796 19:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:47.796 19:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:48.053 19:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:48.054 19:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:48.311 19:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:48.311 19:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:48.570 19:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:48.570 [2024-11-26 19:34:11.642440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:48.570 19:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:48.828 19:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:49.086 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:49.344 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:49.344 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:49.344 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:49.344 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:49.345 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:49.345 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:51.249 19:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:51.249 19:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:32:51.249 19:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:51.249 19:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:51.508 19:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:51.508 19:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:32:51.508 19:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:51.508 [global] 00:32:51.508 thread=1 00:32:51.508 invalidate=1 00:32:51.508 rw=write 00:32:51.508 time_based=1 00:32:51.508 runtime=1 00:32:51.508 ioengine=libaio 00:32:51.508 direct=1 00:32:51.508 bs=4096 00:32:51.508 iodepth=1 00:32:51.508 norandommap=0 00:32:51.508 numjobs=1 00:32:51.508 00:32:51.508 verify_dump=1 00:32:51.508 verify_backlog=512 00:32:51.508 verify_state_save=0 00:32:51.508 do_verify=1 00:32:51.508 verify=crc32c-intel 00:32:51.508 [job0] 00:32:51.508 filename=/dev/nvme0n1 00:32:51.508 [job1] 00:32:51.508 filename=/dev/nvme0n2 00:32:51.508 [job2] 00:32:51.508 filename=/dev/nvme0n3 00:32:51.508 [job3] 00:32:51.508 filename=/dev/nvme0n4 00:32:51.508 Could not set queue depth (nvme0n1) 00:32:51.508 Could not set queue depth (nvme0n2) 00:32:51.508 Could not set queue depth (nvme0n3) 00:32:51.508 Could not set queue depth (nvme0n4) 00:32:51.766 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:51.766 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:51.766 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:51.766 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:51.766 fio-3.35 00:32:51.766 Starting 4 threads 00:32:53.151 00:32:53.151 job0: (groupid=0, jobs=1): err= 0: pid=3988742: Tue Nov 26 19:34:15 2024 00:32:53.151 read: IOPS=21, BW=86.6KiB/s (88.7kB/s)(88.0KiB/1016msec) 00:32:53.151 slat (nsec): min=11870, max=24011, avg=22719.14, stdev=2608.63 00:32:53.151 clat (usec): min=40876, max=42205, avg=41069.46, stdev=334.86 00:32:53.151 lat (usec): min=40899, max=42217, avg=41092.17, stdev=332.57 00:32:53.151 clat percentiles (usec): 00:32:53.151 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:53.151 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:53.151 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:32:53.151 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:53.151 | 99.99th=[42206] 00:32:53.151 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:32:53.151 slat (nsec): min=9161, max=90058, avg=11288.97, stdev=4065.68 00:32:53.151 clat (usec): min=134, max=2558, avg=203.45, stdev=137.02 00:32:53.151 lat (usec): min=144, max=2569, avg=214.73, stdev=137.11 00:32:53.151 clat percentiles (usec): 00:32:53.151 | 1.00th=[ 141], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 169], 00:32:53.151 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 190], 60.00th=[ 202], 00:32:53.151 | 70.00th=[ 210], 80.00th=[ 223], 90.00th=[ 237], 95.00th=[ 249], 
00:32:53.151 | 99.00th=[ 281], 99.50th=[ 359], 99.90th=[ 2573], 99.95th=[ 2573], 00:32:53.151 | 99.99th=[ 2573] 00:32:53.151 bw ( KiB/s): min= 4096, max= 4096, per=20.32%, avg=4096.00, stdev= 0.00, samples=1 00:32:53.151 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:53.151 lat (usec) : 250=91.20%, 500=4.31% 00:32:53.151 lat (msec) : 4=0.37%, 50=4.12% 00:32:53.151 cpu : usr=0.10%, sys=0.99%, ctx=535, majf=0, minf=1 00:32:53.151 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:53.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.151 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:53.151 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:53.151 job1: (groupid=0, jobs=1): err= 0: pid=3988756: Tue Nov 26 19:34:15 2024 00:32:53.151 read: IOPS=2025, BW=8103KiB/s (8297kB/s)(8200KiB/1012msec) 00:32:53.151 slat (nsec): min=7226, max=35286, avg=8504.27, stdev=1389.24 00:32:53.151 clat (usec): min=176, max=41626, avg=260.65, stdev=1286.67 00:32:53.151 lat (usec): min=185, max=41635, avg=269.16, stdev=1286.69 00:32:53.151 clat percentiles (usec): 00:32:53.151 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 190], 20.00th=[ 194], 00:32:53.151 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 221], 00:32:53.151 | 70.00th=[ 237], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 277], 00:32:53.151 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 988], 99.95th=[41157], 00:32:53.151 | 99.99th=[41681] 00:32:53.151 write: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec); 0 zone resets 00:32:53.151 slat (nsec): min=10626, max=52444, avg=12397.99, stdev=2166.69 00:32:53.151 clat (usec): min=128, max=1796, avg=160.26, stdev=54.16 00:32:53.151 lat (usec): min=140, max=1808, avg=172.66, stdev=54.40 00:32:53.151 clat percentiles (usec): 00:32:53.151 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139], 00:32:53.151 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 153], 00:32:53.151 | 70.00th=[ 159], 80.00th=[ 176], 90.00th=[ 200], 95.00th=[ 223], 00:32:53.151 | 99.00th=[ 251], 99.50th=[ 314], 99.90th=[ 857], 99.95th=[ 1385], 00:32:53.151 | 99.99th=[ 1795] 00:32:53.151 bw ( KiB/s): min= 9272, max=11208, per=50.80%, avg=10240.00, stdev=1368.96, samples=2 00:32:53.151 iops : min= 2318, max= 2802, avg=2560.00, stdev=342.24, samples=2 00:32:53.151 lat (usec) : 250=89.72%, 500=10.09%, 750=0.04%, 1000=0.07% 00:32:53.151 lat (msec) : 2=0.04%, 50=0.04% 00:32:53.151 cpu : usr=3.96%, sys=7.52%, ctx=4611, majf=0, minf=1 00:32:53.151 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:53.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.151 issued rwts: total=2050,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:53.151 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:53.151 job2: (groupid=0, jobs=1): err= 0: pid=3988773: Tue Nov 26 19:34:15 2024 00:32:53.151 read: IOPS=80, BW=322KiB/s (330kB/s)(324KiB/1006msec) 00:32:53.151 slat (nsec): min=8792, max=25712, avg=13490.67, stdev=6082.27 00:32:53.151 clat (usec): min=241, max=41938, avg=10827.58, stdev=17966.54 00:32:53.151 lat (usec): min=250, max=41963, avg=10841.07, stdev=17972.16 00:32:53.151 clat percentiles (usec): 00:32:53.151 | 1.00th=[ 241], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 255], 
00:32:53.151 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:32:53.151 | 70.00th=[ 293], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:53.151 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:32:53.151 | 99.99th=[41681] 00:32:53.151 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:32:53.151 slat (nsec): min=9588, max=40160, avg=14446.54, stdev=3011.66 00:32:53.151 clat (usec): min=158, max=3472, avg=222.50, stdev=170.62 00:32:53.151 lat (usec): min=171, max=3485, avg=236.95, stdev=170.74 00:32:53.151 clat percentiles (usec): 00:32:53.151 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:32:53.151 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 210], 00:32:53.151 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 249], 95.00th=[ 269], 00:32:53.151 | 99.00th=[ 367], 99.50th=[ 1565], 99.90th=[ 3458], 99.95th=[ 3458], 00:32:53.151 | 99.99th=[ 3458] 00:32:53.151 bw ( KiB/s): min= 4096, max= 4096, per=20.32%, avg=4096.00, stdev= 0.00, samples=1 00:32:53.151 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:53.151 lat (usec) : 250=79.60%, 500=16.36% 00:32:53.151 lat (msec) : 2=0.34%, 4=0.17%, 50=3.54% 00:32:53.151 cpu : usr=0.60%, sys=1.09%, ctx=595, majf=0, minf=1 00:32:53.151 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:53.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.151 issued rwts: total=81,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:53.151 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:53.151 job3: (groupid=0, jobs=1): err= 0: pid=3988778: Tue Nov 26 19:34:15 2024 00:32:53.151 read: IOPS=1115, BW=4464KiB/s (4571kB/s)(4468KiB/1001msec) 00:32:53.151 slat (nsec): min=7588, max=26971, avg=9981.59, stdev=1924.97 00:32:53.151 clat (usec): min=175, max=43929, avg=605.79, stdev=3887.90 00:32:53.151 lat (usec): min=184, max=43940, avg=615.77, stdev=3888.88 00:32:53.151 clat percentiles (usec): 00:32:53.151 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 196], 20.00th=[ 217], 00:32:53.152 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 245], 00:32:53.152 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 277], 00:32:53.152 | 99.00th=[ 482], 99.50th=[41157], 99.90th=[42206], 99.95th=[43779], 00:32:53.152 | 99.99th=[43779] 00:32:53.152 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:32:53.152 slat (nsec): min=10702, max=41532, avg=13781.60, stdev=2728.67 00:32:53.152 clat (usec): min=132, max=2911, avg=181.45, stdev=139.77 00:32:53.152 lat (usec): min=145, max=2924, avg=195.24, stdev=140.00 00:32:53.152 clat percentiles (usec): 00:32:53.152 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:32:53.152 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 172], 00:32:53.152 | 70.00th=[ 182], 80.00th=[ 192], 90.00th=[ 204], 95.00th=[ 217], 00:32:53.152 | 99.00th=[ 441], 99.50th=[ 955], 99.90th=[ 2180], 99.95th=[ 2900], 00:32:53.152 | 99.99th=[ 2900] 00:32:53.152 bw ( KiB/s): min= 8192, max= 8192, per=40.64%, avg=8192.00, stdev= 0.00, samples=1 00:32:53.152 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:53.152 lat (usec) : 250=85.98%, 500=13.08%, 750=0.08%, 1000=0.19% 00:32:53.152 lat (msec) : 2=0.11%, 4=0.19%, 50=0.38% 00:32:53.152 cpu : usr=2.30%, sys=4.90%, ctx=2656, majf=0, minf=1 00:32:53.152 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:53.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.152 issued rwts: total=1117,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:53.152 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:53.152 00:32:53.152 Run status group 0 (all jobs): 00:32:53.152 READ: bw=12.6MiB/s (13.2MB/s), 86.6KiB/s-8103KiB/s (88.7kB/s-8297kB/s), io=12.8MiB (13.4MB), run=1001-1016msec 00:32:53.152 WRITE: bw=19.7MiB/s (20.6MB/s), 2016KiB/s-9.88MiB/s (2064kB/s-10.4MB/s), io=20.0MiB (21.0MB), run=1001-1016msec 00:32:53.152 00:32:53.152 Disk stats (read/write): 00:32:53.152 nvme0n1: ios=67/512, merge=0/0, ticks=726/100, in_queue=826, util=86.47% 00:32:53.152 nvme0n2: ios=1969/2048, merge=0/0, ticks=667/303, in_queue=970, util=89.11% 00:32:53.152 nvme0n3: ios=106/512, merge=0/0, ticks=1582/106, in_queue=1688, util=93.42% 00:32:53.152 nvme0n4: ios=924/1024, merge=0/0, ticks=1521/183, in_queue=1704, util=93.89% 00:32:53.152 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:53.152 [global] 00:32:53.152 thread=1 00:32:53.152 invalidate=1 00:32:53.152 rw=randwrite 00:32:53.152 time_based=1 00:32:53.152 runtime=1 00:32:53.152 ioengine=libaio 00:32:53.152 direct=1 00:32:53.152 bs=4096 00:32:53.152 iodepth=1 00:32:53.152 norandommap=0 00:32:53.152 numjobs=1 00:32:53.152 00:32:53.152 verify_dump=1 00:32:53.152 verify_backlog=512 00:32:53.152 verify_state_save=0 00:32:53.152 do_verify=1 00:32:53.152 verify=crc32c-intel 00:32:53.152 [job0] 00:32:53.152 filename=/dev/nvme0n1 00:32:53.152 [job1] 00:32:53.152 filename=/dev/nvme0n2 00:32:53.152 [job2] 00:32:53.152 filename=/dev/nvme0n3 00:32:53.152 [job3] 00:32:53.152 filename=/dev/nvme0n4 00:32:53.152 Could not set queue depth (nvme0n1) 00:32:53.152 Could not set queue depth (nvme0n2) 00:32:53.152 Could not set queue depth (nvme0n3) 00:32:53.152 Could not set queue depth (nvme0n4) 00:32:53.409 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:53.410 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:53.410 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:53.410 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:53.410 fio-3.35 00:32:53.410 Starting 4 threads 00:32:54.779 00:32:54.779 job0: (groupid=0, jobs=1): err= 0: pid=3989543: Tue Nov 26 19:34:17 2024 00:32:54.779 read: IOPS=842, BW=3371KiB/s (3452kB/s)(3492KiB/1036msec) 00:32:54.779 slat (nsec): min=7175, max=37659, avg=8742.39, stdev=2433.77 00:32:54.779 clat (usec): min=213, max=42156, avg=919.87, stdev=5212.29 00:32:54.779 lat (usec): min=221, max=42167, avg=928.61, stdev=5212.95 00:32:54.779 clat percentiles (usec): 00:32:54.779 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 237], 20.00th=[ 241], 00:32:54.779 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 251], 00:32:54.779 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 306], 00:32:54.779 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:32:54.779 | 99.99th=[42206] 00:32:54.779 write: IOPS=988, BW=3954KiB/s (4049kB/s)(4096KiB/1036msec); 0 
zone resets 00:32:54.779 slat (nsec): min=10035, max=49693, avg=12059.45, stdev=2659.47 00:32:54.779 clat (usec): min=137, max=1999, avg=201.84, stdev=72.35 00:32:54.779 lat (usec): min=150, max=2014, avg=213.90, stdev=72.67 00:32:54.779 clat percentiles (usec): 00:32:54.779 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:32:54.779 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 198], 00:32:54.779 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 243], 95.00th=[ 245], 00:32:54.779 | 99.00th=[ 273], 99.50th=[ 310], 99.90th=[ 963], 99.95th=[ 2008], 00:32:54.779 | 99.99th=[ 2008] 00:32:54.779 bw ( KiB/s): min= 8192, max= 8192, per=51.80%, avg=8192.00, stdev= 0.00, samples=1 00:32:54.779 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:54.779 lat (usec) : 250=78.39%, 500=20.61%, 750=0.11%, 1000=0.11% 00:32:54.779 lat (msec) : 2=0.05%, 50=0.74% 00:32:54.779 cpu : usr=1.74%, sys=2.80%, ctx=1897, majf=0, minf=1 00:32:54.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.779 issued rwts: total=873,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:54.779 job1: (groupid=0, jobs=1): err= 0: pid=3989558: Tue Nov 26 19:34:17 2024 00:32:54.779 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:32:54.779 slat (nsec): min=12288, max=26689, avg=18210.68, stdev=4223.44 00:32:54.779 clat (usec): min=40389, max=41974, avg=41001.72, stdev=255.26 00:32:54.779 lat (usec): min=40401, max=42001, avg=41019.93, stdev=257.39 00:32:54.779 clat percentiles (usec): 00:32:54.779 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:54.779 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:54.779 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:54.779 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:54.779 | 99.99th=[42206] 00:32:54.779 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:32:54.779 slat (nsec): min=11073, max=55497, avg=14575.64, stdev=3477.81 00:32:54.779 clat (usec): min=135, max=2639, avg=208.26, stdev=117.28 00:32:54.779 lat (usec): min=150, max=2654, avg=222.83, stdev=117.45 00:32:54.779 clat percentiles (usec): 00:32:54.779 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 176], 00:32:54.779 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 200], 00:32:54.779 | 70.00th=[ 215], 80.00th=[ 229], 90.00th=[ 245], 95.00th=[ 265], 00:32:54.780 | 99.00th=[ 322], 99.50th=[ 498], 99.90th=[ 2638], 99.95th=[ 2638], 00:32:54.780 | 99.99th=[ 2638] 00:32:54.780 bw ( KiB/s): min= 4096, max= 4096, per=25.90%, avg=4096.00, stdev= 0.00, samples=1 00:32:54.780 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:54.780 lat (usec) : 250=87.64%, 500=7.87%, 1000=0.19% 00:32:54.780 lat (msec) : 4=0.19%, 50=4.12% 00:32:54.780 cpu : usr=0.39%, sys=1.08%, ctx=535, majf=0, minf=1 00:32:54.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.780 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.780 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:32:54.780 job2: (groupid=0, jobs=1): err= 0: pid=3989575: Tue Nov 26 19:34:17 2024 00:32:54.780 read: IOPS=44, BW=179KiB/s (183kB/s)(184KiB/1028msec) 00:32:54.780 slat (nsec): min=6777, max=23215, avg=14588.48, stdev=7555.61 00:32:54.780 clat (usec): min=231, max=41972, avg=19834.33, stdev=20624.76 00:32:54.780 lat (usec): min=238, max=41996, avg=19848.92, stdev=20631.07 00:32:54.780 clat percentiles (usec): 00:32:54.780 | 1.00th=[ 233], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 247], 00:32:54.780 | 30.00th=[ 251], 40.00th=[ 269], 50.00th=[ 685], 60.00th=[41157], 00:32:54.780 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:32:54.780 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:54.780 | 99.99th=[42206] 00:32:54.780 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:32:54.780 slat (nsec): min=9294, max=39907, avg=10623.41, stdev=2456.54 00:32:54.780 clat (usec): min=152, max=2574, avg=208.30, stdev=110.26 00:32:54.780 lat (usec): min=162, max=2585, avg=218.92, stdev=110.41 00:32:54.780 clat percentiles (usec): 00:32:54.780 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:32:54.780 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 194], 60.00th=[ 204], 00:32:54.780 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 247], 00:32:54.780 | 99.00th=[ 302], 99.50th=[ 408], 99.90th=[ 2573], 99.95th=[ 2573], 00:32:54.780 | 99.99th=[ 2573] 00:32:54.780 bw ( KiB/s): min= 4096, max= 4096, per=25.90%, avg=4096.00, stdev= 0.00, samples=1 00:32:54.780 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:54.780 lat (usec) : 250=90.68%, 500=4.84%, 750=0.18%, 1000=0.18% 00:32:54.780 lat (msec) : 4=0.18%, 50=3.94% 00:32:54.780 cpu : usr=0.39%, sys=0.39%, ctx=559, majf=0, minf=1 00:32:54.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.780 issued rwts: total=46,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:54.780 job3: (groupid=0, jobs=1): err= 0: pid=3989580: Tue Nov 26 19:34:17 2024 00:32:54.780 read: IOPS=1781, BW=7125KiB/s (7296kB/s)(7360KiB/1033msec) 00:32:54.780 slat (nsec): min=6325, max=42460, avg=7384.49, stdev=1561.63 00:32:54.780 clat (usec): min=177, max=42027, avg=343.33, stdev=2332.76 00:32:54.780 lat (usec): min=191, max=42051, avg=350.71, stdev=2333.58 00:32:54.780 clat percentiles (usec): 00:32:54.780 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 190], 20.00th=[ 194], 00:32:54.780 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 210], 60.00th=[ 215], 00:32:54.780 | 70.00th=[ 219], 80.00th=[ 223], 90.00th=[ 229], 95.00th=[ 237], 00:32:54.780 | 99.00th=[ 258], 99.50th=[ 302], 99.90th=[41157], 99.95th=[42206], 00:32:54.780 | 99.99th=[42206] 00:32:54.780 write: IOPS=1982, BW=7930KiB/s (8121kB/s)(8192KiB/1033msec); 0 zone resets 00:32:54.780 slat (nsec): min=9407, max=43448, avg=10831.10, stdev=1828.27 00:32:54.780 clat (usec): min=128, max=2002, avg=173.22, stdev=68.61 00:32:54.780 lat (usec): min=138, max=2011, avg=184.06, stdev=68.87 00:32:54.780 clat percentiles (usec): 00:32:54.780 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 137], 20.00th=[ 139], 00:32:54.780 | 30.00th=[ 147], 40.00th=[ 157], 50.00th=[ 165], 60.00th=[ 174], 00:32:54.780 | 70.00th=[ 180], 80.00th=[ 188], 90.00th=[ 
237], 95.00th=[ 249], 00:32:54.780 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 1369], 99.95th=[ 1500], 00:32:54.780 | 99.99th=[ 2008] 00:32:54.780 bw ( KiB/s): min= 5856, max=10528, per=51.80%, avg=8192.00, stdev=3303.60, samples=2 00:32:54.780 iops : min= 1464, max= 2632, avg=2048.00, stdev=825.90, samples=2 00:32:54.780 lat (usec) : 250=96.68%, 500=3.01%, 750=0.03%, 1000=0.05% 00:32:54.780 lat (msec) : 2=0.05%, 4=0.03%, 50=0.15% 00:32:54.780 cpu : usr=1.84%, sys=3.68%, ctx=3890, majf=0, minf=1 00:32:54.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.780 issued rwts: total=1840,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:54.780 00:32:54.780 Run status group 0 (all jobs): 00:32:54.780 READ: bw=10.5MiB/s (11.0MB/s), 86.3KiB/s-7125KiB/s (88.3kB/s-7296kB/s), io=10.9MiB (11.4MB), run=1020-1036msec 00:32:54.780 WRITE: bw=15.4MiB/s (16.2MB/s), 1992KiB/s-7930KiB/s (2040kB/s-8121kB/s), io=16.0MiB (16.8MB), run=1020-1036msec 00:32:54.780 00:32:54.780 Disk stats (read/write): 00:32:54.780 nvme0n1: ios=918/1024, merge=0/0, ticks=602/198, in_queue=800, util=86.57% 00:32:54.780 nvme0n2: ios=66/512, merge=0/0, ticks=1325/109, in_queue=1434, util=89.23% 00:32:54.780 nvme0n3: ios=67/512, merge=0/0, ticks=1609/102, in_queue=1711, util=93.32% 00:32:54.780 nvme0n4: ios=1899/2048, merge=0/0, ticks=743/337, in_queue=1080, util=95.05% 00:32:54.780 19:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:54.780 [global] 00:32:54.780 thread=1 00:32:54.780 invalidate=1 00:32:54.780 rw=write 00:32:54.780 time_based=1 00:32:54.780 runtime=1 00:32:54.780 ioengine=libaio 00:32:54.780 direct=1 00:32:54.780 bs=4096 00:32:54.780 iodepth=128 00:32:54.780 norandommap=0 00:32:54.780 numjobs=1 00:32:54.780 00:32:54.780 verify_dump=1 00:32:54.780 verify_backlog=512 00:32:54.780 verify_state_save=0 00:32:54.780 do_verify=1 00:32:54.780 verify=crc32c-intel 00:32:54.780 [job0] 00:32:54.780 filename=/dev/nvme0n1 00:32:54.780 [job1] 00:32:54.780 filename=/dev/nvme0n2 00:32:54.780 [job2] 00:32:54.780 filename=/dev/nvme0n3 00:32:54.780 [job3] 00:32:54.780 filename=/dev/nvme0n4 00:32:54.780 Could not set queue depth (nvme0n1) 00:32:54.780 Could not set queue depth (nvme0n2) 00:32:54.780 Could not set queue depth (nvme0n3) 00:32:54.780 Could not set queue depth (nvme0n4) 00:32:54.780 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:54.780 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:54.780 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:54.780 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:54.780 fio-3.35 00:32:54.780 Starting 4 threads 00:32:56.152 00:32:56.152 job0: (groupid=0, jobs=1): err= 0: pid=3990205: Tue Nov 26 19:34:19 2024 00:32:56.152 read: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec) 00:32:56.152 slat (nsec): min=1270, max=11122k, avg=92280.98, stdev=717900.75 00:32:56.152 clat (usec): min=3710, max=40806, avg=11585.90, stdev=4191.91 
00:32:56.152 lat (usec): min=3721, max=40815, avg=11678.18, stdev=4248.48 00:32:56.152 clat percentiles (usec): 00:32:56.152 | 1.00th=[ 6390], 5.00th=[ 7832], 10.00th=[ 8291], 20.00th=[ 9110], 00:32:56.152 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[10814], 00:32:56.152 | 70.00th=[11994], 80.00th=[14222], 90.00th=[16188], 95.00th=[18482], 00:32:56.152 | 99.00th=[32113], 99.50th=[36439], 99.90th=[39584], 99.95th=[40633], 00:32:56.152 | 99.99th=[40633] 00:32:56.152 write: IOPS=5322, BW=20.8MiB/s (21.8MB/s)(21.0MiB/1011msec); 0 zone resets 00:32:56.152 slat (usec): min=2, max=9713, avg=91.48, stdev=587.51 00:32:56.152 clat (usec): min=2128, max=64140, avg=12731.66, stdev=8224.36 00:32:56.152 lat (usec): min=2138, max=64148, avg=12823.14, stdev=8278.03 00:32:56.152 clat percentiles (usec): 00:32:56.152 | 1.00th=[ 5145], 5.00th=[ 6390], 10.00th=[ 6980], 20.00th=[ 8225], 00:32:56.152 | 30.00th=[ 8848], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10552], 00:32:56.152 | 70.00th=[12780], 80.00th=[15139], 90.00th=[20055], 95.00th=[31851], 00:32:56.152 | 99.00th=[46924], 99.50th=[52167], 99.90th=[64226], 99.95th=[64226], 00:32:56.152 | 99.99th=[64226] 00:32:56.152 bw ( KiB/s): min=18176, max=23856, per=30.25%, avg=21016.00, stdev=4016.37, samples=2 00:32:56.152 iops : min= 4544, max= 5964, avg=5254.00, stdev=1004.09, samples=2 00:32:56.152 lat (msec) : 4=0.38%, 10=46.36%, 20=46.82%, 50=6.07%, 100=0.37% 00:32:56.152 cpu : usr=4.95%, sys=6.04%, ctx=407, majf=0, minf=1 00:32:56.152 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:56.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:56.152 issued rwts: total=5120,5381,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:56.152 job1: (groupid=0, jobs=1): err= 0: pid=3990214: Tue Nov 26 19:34:19 2024 00:32:56.152 read: IOPS=2600, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1004msec) 00:32:56.152 slat (nsec): min=1137, max=19616k, avg=199370.53, stdev=1350986.97 00:32:56.152 clat (usec): min=2729, max=77633, avg=25083.18, stdev=16463.35 00:32:56.152 lat (usec): min=7324, max=77647, avg=25282.55, stdev=16573.09 00:32:56.152 clat percentiles (usec): 00:32:56.152 | 1.00th=[ 7570], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[12125], 00:32:56.152 | 30.00th=[15664], 40.00th=[16057], 50.00th=[17433], 60.00th=[23462], 00:32:56.152 | 70.00th=[28181], 80.00th=[35390], 90.00th=[46400], 95.00th=[66323], 00:32:56.152 | 99.00th=[77071], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119], 00:32:56.152 | 99.99th=[78119] 00:32:56.152 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:32:56.152 slat (usec): min=2, max=14275, avg=150.59, stdev=896.22 00:32:56.152 clat (usec): min=2622, max=86164, avg=20073.72, stdev=13801.73 00:32:56.152 lat (usec): min=2632, max=86173, avg=20224.30, stdev=13863.93 00:32:56.152 clat percentiles (usec): 00:32:56.152 | 1.00th=[ 5407], 5.00th=[ 9110], 10.00th=[10421], 20.00th=[10945], 00:32:56.152 | 30.00th=[12125], 40.00th=[12911], 50.00th=[15401], 60.00th=[17957], 00:32:56.152 | 70.00th=[21103], 80.00th=[26346], 90.00th=[38011], 95.00th=[44303], 00:32:56.152 | 99.00th=[82314], 99.50th=[84411], 99.90th=[86508], 99.95th=[86508], 00:32:56.152 | 99.99th=[86508] 00:32:56.152 bw ( KiB/s): min=11264, max=12704, per=17.25%, avg=11984.00, stdev=1018.23, samples=2 00:32:56.152 iops : min= 2816, max= 3176, avg=2996.00, 
stdev=254.56, samples=2 00:32:56.153 lat (msec) : 4=0.33%, 10=7.69%, 20=51.03%, 50=34.49%, 100=6.46% 00:32:56.153 cpu : usr=1.69%, sys=3.69%, ctx=310, majf=0, minf=1 00:32:56.153 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:32:56.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:56.153 issued rwts: total=2611,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:56.153 job2: (groupid=0, jobs=1): err= 0: pid=3990215: Tue Nov 26 19:34:19 2024 00:32:56.153 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:32:56.153 slat (nsec): min=1114, max=20937k, avg=111504.52, stdev=853485.50 00:32:56.153 clat (usec): min=6374, max=63255, avg=14229.91, stdev=5503.53 00:32:56.153 lat (usec): min=6381, max=63258, avg=14341.41, stdev=5554.22 00:32:56.153 clat percentiles (usec): 00:32:56.153 | 1.00th=[ 6849], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[10552], 00:32:56.153 | 30.00th=[11338], 40.00th=[12256], 50.00th=[12780], 60.00th=[13435], 00:32:56.153 | 70.00th=[14353], 80.00th=[16188], 90.00th=[21103], 95.00th=[26084], 00:32:56.153 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[38011], 00:32:56.153 | 99.99th=[63177] 00:32:56.153 write: IOPS=4431, BW=17.3MiB/s (18.1MB/s)(17.5MiB/1011msec); 0 zone resets 00:32:56.153 slat (nsec): min=1988, max=18316k, avg=114953.91, stdev=751972.13 00:32:56.153 clat (usec): min=1541, max=52792, avg=15549.50, stdev=9153.70 00:32:56.153 lat (usec): min=1554, max=52826, avg=15664.45, stdev=9207.52 00:32:56.153 clat percentiles (usec): 00:32:56.153 | 1.00th=[ 4555], 5.00th=[ 7308], 10.00th=[ 8029], 20.00th=[10421], 00:32:56.153 | 30.00th=[11207], 40.00th=[11469], 50.00th=[12125], 60.00th=[13042], 00:32:56.153 | 70.00th=[15401], 80.00th=[19530], 90.00th=[25297], 95.00th=[36439], 00:32:56.153 | 99.00th=[50070], 99.50th=[51643], 99.90th=[52167], 99.95th=[52167], 00:32:56.153 | 99.99th=[52691] 00:32:56.153 bw ( KiB/s): min=14336, max=20480, per=25.06%, avg=17408.00, stdev=4344.46, samples=2 00:32:56.153 iops : min= 3584, max= 5120, avg=4352.00, stdev=1086.12, samples=2 00:32:56.153 lat (msec) : 2=0.13%, 4=0.29%, 10=15.31%, 20=68.77%, 50=15.02% 00:32:56.153 lat (msec) : 100=0.48% 00:32:56.153 cpu : usr=2.67%, sys=5.25%, ctx=367, majf=0, minf=1 00:32:56.153 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:56.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:56.153 issued rwts: total=4096,4480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:56.153 job3: (groupid=0, jobs=1): err= 0: pid=3990216: Tue Nov 26 19:34:19 2024 00:32:56.153 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:32:56.153 slat (nsec): min=1216, max=16384k, avg=98444.69, stdev=759294.76 00:32:56.153 clat (usec): min=2536, max=41883, avg=13637.16, stdev=4909.06 00:32:56.153 lat (usec): min=2542, max=41892, avg=13735.61, stdev=4961.39 00:32:56.153 clat percentiles (usec): 00:32:56.153 | 1.00th=[ 4047], 5.00th=[ 7570], 10.00th=[ 8717], 20.00th=[10552], 00:32:56.153 | 30.00th=[11076], 40.00th=[12387], 50.00th=[12911], 60.00th=[13698], 00:32:56.153 | 70.00th=[14615], 80.00th=[16450], 90.00th=[18744], 95.00th=[24249], 00:32:56.153 | 99.00th=[32637], 99.50th=[37487], 
99.90th=[41681], 99.95th=[41681], 00:32:56.153 | 99.99th=[41681] 00:32:56.153 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.1MiB/1007msec); 0 zone resets 00:32:56.153 slat (nsec): min=1932, max=13570k, avg=95465.90, stdev=657081.35 00:32:56.153 clat (usec): min=703, max=52587, avg=13867.11, stdev=8653.68 00:32:56.153 lat (usec): min=893, max=52594, avg=13962.57, stdev=8709.91 00:32:56.153 clat percentiles (usec): 00:32:56.153 | 1.00th=[ 3228], 5.00th=[ 5604], 10.00th=[ 6980], 20.00th=[ 8160], 00:32:56.153 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[11076], 60.00th=[12125], 00:32:56.153 | 70.00th=[12911], 80.00th=[17957], 90.00th=[27395], 95.00th=[32113], 00:32:56.153 | 99.00th=[46924], 99.50th=[49546], 99.90th=[52691], 99.95th=[52691], 00:32:56.153 | 99.99th=[52691] 00:32:56.153 bw ( KiB/s): min=16944, max=19920, per=26.53%, avg=18432.00, stdev=2104.35, samples=2 00:32:56.153 iops : min= 4236, max= 4980, avg=4608.00, stdev=526.09, samples=2 00:32:56.153 lat (usec) : 750=0.01%, 1000=0.10% 00:32:56.153 lat (msec) : 2=0.21%, 4=1.56%, 10=24.73%, 20=61.16%, 50=12.01% 00:32:56.153 lat (msec) : 100=0.23% 00:32:56.153 cpu : usr=2.78%, sys=5.77%, ctx=351, majf=0, minf=1 00:32:56.153 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:56.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:56.153 issued rwts: total=4608,4627,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:56.153 00:32:56.153 Run status group 0 (all jobs): 00:32:56.153 READ: bw=63.5MiB/s (66.6MB/s), 10.2MiB/s-19.8MiB/s (10.7MB/s-20.7MB/s), io=64.2MiB (67.3MB), run=1004-1011msec 00:32:56.153 WRITE: bw=67.8MiB/s (71.1MB/s), 12.0MiB/s-20.8MiB/s (12.5MB/s-21.8MB/s), io=68.6MiB (71.9MB), run=1004-1011msec 00:32:56.153 00:32:56.153 Disk stats (read/write): 00:32:56.153 nvme0n1: ios=4660/4959, merge=0/0, ticks=49967/54485, in_queue=104452, util=97.90% 00:32:56.153 nvme0n2: ios=2083/2463, merge=0/0, ticks=19291/18527, in_queue=37818, util=97.46% 00:32:56.153 nvme0n3: ios=3609/3654, merge=0/0, ticks=36043/35590, in_queue=71633, util=98.13% 00:32:56.153 nvme0n4: ios=3626/3826, merge=0/0, ticks=43251/49370, in_queue=92621, util=97.90% 00:32:56.153 19:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:56.153 [global] 00:32:56.153 thread=1 00:32:56.153 invalidate=1 00:32:56.153 rw=randwrite 00:32:56.153 time_based=1 00:32:56.153 runtime=1 00:32:56.153 ioengine=libaio 00:32:56.153 direct=1 00:32:56.153 bs=4096 00:32:56.153 iodepth=128 00:32:56.153 norandommap=0 00:32:56.153 numjobs=1 00:32:56.153 00:32:56.153 verify_dump=1 00:32:56.153 verify_backlog=512 00:32:56.153 verify_state_save=0 00:32:56.153 do_verify=1 00:32:56.153 verify=crc32c-intel 00:32:56.153 [job0] 00:32:56.153 filename=/dev/nvme0n1 00:32:56.153 [job1] 00:32:56.153 filename=/dev/nvme0n2 00:32:56.153 [job2] 00:32:56.153 filename=/dev/nvme0n3 00:32:56.153 [job3] 00:32:56.153 filename=/dev/nvme0n4 00:32:56.153 Could not set queue depth (nvme0n1) 00:32:56.153 Could not set queue depth (nvme0n2) 00:32:56.153 Could not set queue depth (nvme0n3) 00:32:56.153 Could not set queue depth (nvme0n4) 00:32:56.412 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:56.412 
job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:56.412 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:56.412 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:56.412 fio-3.35 00:32:56.412 Starting 4 threads 00:32:57.789 00:32:57.789 job0: (groupid=0, jobs=1): err= 0: pid=3990845: Tue Nov 26 19:34:20 2024 00:32:57.789 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:32:57.789 slat (nsec): min=1131, max=22858k, avg=122373.59, stdev=943623.28 00:32:57.789 clat (usec): min=2864, max=56217, avg=16669.16, stdev=9882.62 00:32:57.789 lat (usec): min=2872, max=63743, avg=16791.53, stdev=9958.87 00:32:57.789 clat percentiles (usec): 00:32:57.789 | 1.00th=[ 3097], 5.00th=[ 6849], 10.00th=[ 7504], 20.00th=[10159], 00:32:57.789 | 30.00th=[10945], 40.00th=[11207], 50.00th=[12125], 60.00th=[16057], 00:32:57.789 | 70.00th=[19006], 80.00th=[24511], 90.00th=[28967], 95.00th=[40633], 00:32:57.789 | 99.00th=[50070], 99.50th=[53740], 99.90th=[53740], 99.95th=[56361], 00:32:57.789 | 99.99th=[56361] 00:32:57.789 write: IOPS=4122, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1009msec); 0 zone resets 00:32:57.789 slat (nsec): min=1858, max=22271k, avg=105129.14, stdev=932765.70 00:32:57.789 clat (usec): min=904, max=72700, avg=14263.05, stdev=10831.73 00:32:57.789 lat (usec): min=912, max=72731, avg=14368.18, stdev=10913.45 00:32:57.789 clat percentiles (usec): 00:32:57.789 | 1.00th=[ 1745], 5.00th=[ 4015], 10.00th=[ 5604], 20.00th=[ 7242], 00:32:57.789 | 30.00th=[ 8225], 40.00th=[ 9765], 50.00th=[10683], 60.00th=[12649], 00:32:57.790 | 70.00th=[15533], 80.00th=[17957], 90.00th=[27395], 95.00th=[36439], 00:32:57.790 | 99.00th=[59507], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:32:57.790 | 99.99th=[72877] 00:32:57.790 bw ( KiB/s): min=16144, max=16624, per=26.12%, avg=16384.00, stdev=339.41, samples=2 00:32:57.790 iops : min= 4036, max= 4156, avg=4096.00, stdev=84.85, samples=2 00:32:57.790 lat (usec) : 1000=0.13% 00:32:57.790 lat (msec) : 2=0.53%, 4=2.64%, 10=26.76%, 20=47.80%, 50=20.23% 00:32:57.790 lat (msec) : 100=1.91% 00:32:57.790 cpu : usr=2.58%, sys=5.16%, ctx=289, majf=0, minf=1 00:32:57.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:57.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:57.790 issued rwts: total=4096,4160,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:57.790 job1: (groupid=0, jobs=1): err= 0: pid=3990846: Tue Nov 26 19:34:20 2024 00:32:57.790 read: IOPS=5697, BW=22.3MiB/s (23.3MB/s)(23.3MiB/1049msec) 00:32:57.790 slat (nsec): min=1071, max=12968k, avg=78899.29, stdev=595270.43 00:32:57.790 clat (usec): min=3627, max=61567, avg=11597.35, stdev=7837.89 00:32:57.790 lat (usec): min=3633, max=61571, avg=11676.25, stdev=7864.41 00:32:57.790 clat percentiles (usec): 00:32:57.790 | 1.00th=[ 4359], 5.00th=[ 5735], 10.00th=[ 6980], 20.00th=[ 7570], 00:32:57.790 | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[10421], 00:32:57.790 | 70.00th=[11600], 80.00th=[13304], 90.00th=[16450], 95.00th=[22152], 00:32:57.790 | 99.00th=[52691], 99.50th=[52691], 99.90th=[61604], 99.95th=[61604], 00:32:57.790 | 99.99th=[61604] 00:32:57.790 write: IOPS=5857, BW=22.9MiB/s 
(24.0MB/s)(24.0MiB/1049msec); 0 zone resets 00:32:57.790 slat (nsec): min=1825, max=9294.0k, avg=80712.16, stdev=573701.18 00:32:57.790 clat (usec): min=1830, max=30589, avg=10274.42, stdev=5025.41 00:32:57.790 lat (usec): min=1840, max=30632, avg=10355.13, stdev=5072.92 00:32:57.790 clat percentiles (usec): 00:32:57.790 | 1.00th=[ 3326], 5.00th=[ 4948], 10.00th=[ 5473], 20.00th=[ 7504], 00:32:57.790 | 30.00th=[ 7832], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9765], 00:32:57.790 | 70.00th=[10552], 80.00th=[11863], 90.00th=[18220], 95.00th=[22414], 00:32:57.790 | 99.00th=[27919], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:32:57.790 | 99.99th=[30540] 00:32:57.790 bw ( KiB/s): min=23024, max=26128, per=39.18%, avg=24576.00, stdev=2194.86, samples=2 00:32:57.790 iops : min= 5756, max= 6532, avg=6144.00, stdev=548.71, samples=2 00:32:57.790 lat (msec) : 2=0.15%, 4=0.97%, 10=58.80%, 20=32.93%, 50=6.12% 00:32:57.790 lat (msec) : 100=1.04% 00:32:57.790 cpu : usr=3.43%, sys=6.58%, ctx=445, majf=0, minf=1 00:32:57.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:32:57.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:57.790 issued rwts: total=5977,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:57.790 job2: (groupid=0, jobs=1): err= 0: pid=3990847: Tue Nov 26 19:34:20 2024 00:32:57.790 read: IOPS=2983, BW=11.7MiB/s (12.2MB/s)(11.7MiB/1007msec) 00:32:57.790 slat (nsec): min=1631, max=28035k, avg=178042.86, stdev=1405947.83 00:32:57.790 clat (usec): min=4389, max=77656, avg=22968.03, stdev=12046.23 00:32:57.790 lat (usec): min=8768, max=80655, avg=23146.07, stdev=12190.58 00:32:57.790 clat percentiles (usec): 00:32:57.790 | 1.00th=[ 8848], 5.00th=[11207], 10.00th=[11994], 20.00th=[14746], 00:32:57.790 | 30.00th=[15926], 40.00th=[17171], 50.00th=[18744], 60.00th=[20055], 00:32:57.790 | 70.00th=[25297], 80.00th=[30540], 90.00th=[41681], 95.00th=[49546], 00:32:57.790 | 99.00th=[62129], 99.50th=[70779], 99.90th=[78119], 99.95th=[78119], 00:32:57.790 | 99.99th=[78119] 00:32:57.790 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:32:57.790 slat (usec): min=2, max=22219, avg=132.89, stdev=1102.97 00:32:57.790 clat (usec): min=1994, max=72794, avg=19047.11, stdev=10092.91 00:32:57.790 lat (usec): min=2000, max=72807, avg=19180.00, stdev=10197.57 00:32:57.790 clat percentiles (usec): 00:32:57.790 | 1.00th=[ 8717], 5.00th=[10945], 10.00th=[11207], 20.00th=[11863], 00:32:57.790 | 30.00th=[13304], 40.00th=[14222], 50.00th=[15795], 60.00th=[17433], 00:32:57.790 | 70.00th=[18482], 80.00th=[23725], 90.00th=[33817], 95.00th=[43254], 00:32:57.790 | 99.00th=[53740], 99.50th=[53740], 99.90th=[72877], 99.95th=[72877], 00:32:57.790 | 99.99th=[72877] 00:32:57.790 bw ( KiB/s): min=11536, max=13040, per=19.59%, avg=12288.00, stdev=1063.49, samples=2 00:32:57.790 iops : min= 2884, max= 3260, avg=3072.00, stdev=265.87, samples=2 00:32:57.790 lat (msec) : 2=0.05%, 4=0.15%, 10=2.67%, 20=64.14%, 50=30.15% 00:32:57.790 lat (msec) : 100=2.85% 00:32:57.790 cpu : usr=2.49%, sys=4.97%, ctx=188, majf=0, minf=1 00:32:57.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:32:57.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:57.790 
issued rwts: total=3004,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:57.790 job3: (groupid=0, jobs=1): err= 0: pid=3990848: Tue Nov 26 19:34:20 2024 00:32:57.790 read: IOPS=2783, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1007msec) 00:32:57.790 slat (nsec): min=1599, max=21826k, avg=143015.53, stdev=1036018.91 00:32:57.790 clat (usec): min=5177, max=67525, avg=17902.75, stdev=8157.06 00:32:57.790 lat (usec): min=6874, max=67534, avg=18045.77, stdev=8238.49 00:32:57.790 clat percentiles (usec): 00:32:57.790 | 1.00th=[ 8029], 5.00th=[ 9896], 10.00th=[11731], 20.00th=[13042], 00:32:57.790 | 30.00th=[14091], 40.00th=[15008], 50.00th=[16909], 60.00th=[17433], 00:32:57.790 | 70.00th=[18220], 80.00th=[21103], 90.00th=[25560], 95.00th=[28967], 00:32:57.790 | 99.00th=[56886], 99.50th=[62129], 99.90th=[67634], 99.95th=[67634], 00:32:57.790 | 99.99th=[67634] 00:32:57.790 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:32:57.790 slat (usec): min=2, max=17254, avg=186.97, stdev=1027.38 00:32:57.790 clat (usec): min=2621, max=73976, avg=25002.87, stdev=18613.44 00:32:57.790 lat (usec): min=2633, max=73988, avg=25189.83, stdev=18736.89 00:32:57.790 clat percentiles (usec): 00:32:57.790 | 1.00th=[ 3064], 5.00th=[ 8225], 10.00th=[10814], 20.00th=[11994], 00:32:57.790 | 30.00th=[13698], 40.00th=[14615], 50.00th=[16712], 60.00th=[18744], 00:32:57.790 | 70.00th=[24773], 80.00th=[40633], 90.00th=[61080], 95.00th=[67634], 00:32:57.790 | 99.00th=[71828], 99.50th=[72877], 99.90th=[73925], 99.95th=[73925], 00:32:57.790 | 99.99th=[73925] 00:32:57.790 bw ( KiB/s): min=12280, max=12296, per=19.59%, avg=12288.00, stdev=11.31, samples=2 00:32:57.790 iops : min= 3070, max= 3074, avg=3072.00, stdev= 2.83, samples=2 00:32:57.790 lat (msec) : 4=0.65%, 10=6.06%, 20=62.59%, 50=21.58%, 100=9.12% 00:32:57.790 cpu : usr=2.58%, sys=4.17%, ctx=294, majf=0, minf=2 00:32:57.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:32:57.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:57.790 issued rwts: total=2803,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:57.790 00:32:57.790 Run status group 0 (all jobs): 00:32:57.790 READ: bw=59.1MiB/s (62.0MB/s), 10.9MiB/s-22.3MiB/s (11.4MB/s-23.3MB/s), io=62.0MiB (65.0MB), run=1007-1049msec 00:32:57.790 WRITE: bw=61.2MiB/s (64.2MB/s), 11.9MiB/s-22.9MiB/s (12.5MB/s-24.0MB/s), io=64.2MiB (67.4MB), run=1007-1049msec 00:32:57.790 00:32:57.790 Disk stats (read/write): 00:32:57.790 nvme0n1: ios=3123/3315, merge=0/0, ticks=29432/29662, in_queue=59094, util=96.19% 00:32:57.790 nvme0n2: ios=4747/5120, merge=0/0, ticks=31539/32774, in_queue=64313, util=98.36% 00:32:57.790 nvme0n3: ios=2048/2527, merge=0/0, ticks=25184/23215, in_queue=48399, util=87.54% 00:32:57.790 nvme0n4: ios=2048/2255, merge=0/0, ticks=37746/55828, in_queue=93574, util=89.08% 00:32:57.790 19:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:57.790 19:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3991336 00:32:57.790 19:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:57.790 19:34:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:57.790 [global] 00:32:57.790 thread=1 00:32:57.790 invalidate=1 00:32:57.790 rw=read 00:32:57.790 time_based=1 00:32:57.790 runtime=10 00:32:57.790 ioengine=libaio 00:32:57.790 direct=1 00:32:57.790 bs=4096 00:32:57.790 iodepth=1 00:32:57.790 norandommap=1 00:32:57.790 numjobs=1 00:32:57.790 00:32:57.790 [job0] 00:32:57.790 filename=/dev/nvme0n1 00:32:57.790 [job1] 00:32:57.790 filename=/dev/nvme0n2 00:32:57.790 [job2] 00:32:57.790 filename=/dev/nvme0n3 00:32:57.790 [job3] 00:32:57.790 filename=/dev/nvme0n4 00:32:57.790 Could not set queue depth (nvme0n1) 00:32:57.790 Could not set queue depth (nvme0n2) 00:32:57.790 Could not set queue depth (nvme0n3) 00:32:57.790 Could not set queue depth (nvme0n4) 00:32:58.049 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.049 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.049 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.049 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.049 fio-3.35 00:32:58.049 Starting 4 threads 00:33:01.332 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:33:01.332 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:33:01.332 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=1146880, buflen=4096 00:33:01.332 fio: pid=3991616, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:01.332 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:01.332 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:33:01.332 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1146880, buflen=4096 00:33:01.332 fio: pid=3991613, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:01.332 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=33153024, buflen=4096 00:33:01.332 fio: pid=3991610, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:01.332 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:01.332 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:33:01.590 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:01.590 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:33:01.590 fio: io_u error on file /dev/nvme0n2: Operation not supported: read 
offset=856064, buflen=4096 00:33:01.590 fio: pid=3991611, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:01.590 00:33:01.590 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3991610: Tue Nov 26 19:34:24 2024 00:33:01.590 read: IOPS=2589, BW=10.1MiB/s (10.6MB/s)(31.6MiB/3126msec) 00:33:01.590 slat (usec): min=6, max=13632, avg=10.10, stdev=184.89 00:33:01.590 clat (usec): min=166, max=41961, avg=373.83, stdev=2681.81 00:33:01.590 lat (usec): min=178, max=41983, avg=383.93, stdev=2689.09 00:33:01.590 clat percentiles (usec): 00:33:01.590 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 184], 00:33:01.590 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 192], 00:33:01.590 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 212], 95.00th=[ 262], 00:33:01.590 | 99.00th=[ 424], 99.50th=[ 1237], 99.90th=[41157], 99.95th=[41157], 00:33:01.590 | 99.99th=[42206] 00:33:01.590 bw ( KiB/s): min= 96, max=20296, per=97.00%, avg=10174.00, stdev=8629.75, samples=6 00:33:01.590 iops : min= 24, max= 5074, avg=2543.50, stdev=2157.44, samples=6 00:33:01.590 lat (usec) : 250=93.70%, 500=5.68%, 750=0.05%, 1000=0.02% 00:33:01.590 lat (msec) : 2=0.09%, 4=0.01%, 50=0.43% 00:33:01.590 cpu : usr=0.58%, sys=2.37%, ctx=8097, majf=0, minf=2 00:33:01.590 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.590 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.590 issued rwts: total=8095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.590 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:01.590 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3991611: Tue Nov 26 19:34:24 2024 00:33:01.590 read: IOPS=62, BW=247KiB/s (253kB/s)(836KiB/3380msec) 00:33:01.590 slat (usec): min=6, max=10832, avg=107.60, stdev=962.26 00:33:01.590 clat (usec): min=232, max=44987, avg=15949.48, stdev=19892.33 00:33:01.590 lat (usec): min=239, max=51920, avg=16057.48, stdev=20041.77 00:33:01.590 clat percentiles (usec): 00:33:01.590 | 1.00th=[ 262], 5.00th=[ 265], 10.00th=[ 265], 20.00th=[ 269], 00:33:01.590 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 955], 00:33:01.590 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:33:01.590 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:33:01.590 | 99.99th=[44827] 00:33:01.590 bw ( KiB/s): min= 96, max= 928, per=2.27%, avg=238.67, stdev=337.73, samples=6 00:33:01.590 iops : min= 24, max= 232, avg=59.67, stdev=84.43, samples=6 00:33:01.590 lat (usec) : 250=0.48%, 500=56.19%, 750=1.90%, 1000=1.90% 00:33:01.590 lat (msec) : 2=0.48%, 4=0.48%, 50=38.10% 00:33:01.590 cpu : usr=0.00%, sys=0.15%, ctx=215, majf=0, minf=2 00:33:01.590 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.590 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.590 issued rwts: total=210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.590 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:01.590 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3991613: Tue Nov 26 19:34:24 2024 00:33:01.590 read: IOPS=94, BW=378KiB/s (387kB/s)(1120KiB/2961msec) 00:33:01.590 
slat (nsec): min=6153, max=58118, avg=11911.09, stdev=6550.52 00:33:01.590 clat (usec): min=186, max=42028, avg=10481.61, stdev=17694.55 00:33:01.590 lat (usec): min=193, max=42049, avg=10493.47, stdev=17699.46 00:33:01.590 clat percentiles (usec): 00:33:01.590 | 1.00th=[ 190], 5.00th=[ 221], 10.00th=[ 247], 20.00th=[ 255], 00:33:01.591 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 297], 00:33:01.591 | 70.00th=[ 375], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:01.591 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:01.591 | 99.99th=[42206] 00:33:01.591 bw ( KiB/s): min= 96, max= 1640, per=4.08%, avg=428.80, stdev=678.74, samples=5 00:33:01.591 iops : min= 24, max= 410, avg=107.20, stdev=169.69, samples=5 00:33:01.591 lat (usec) : 250=12.10%, 500=61.92%, 750=0.36% 00:33:01.591 lat (msec) : 2=0.36%, 50=24.91% 00:33:01.591 cpu : usr=0.03%, sys=0.17%, ctx=282, majf=0, minf=1 00:33:01.591 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.591 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.591 issued rwts: total=281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.591 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:01.591 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3991616: Tue Nov 26 19:34:24 2024 00:33:01.591 read: IOPS=102, BW=407KiB/s (417kB/s)(1120KiB/2753msec) 00:33:01.591 slat (nsec): min=7267, max=37300, avg=12666.86, stdev=6241.03 00:33:01.591 clat (usec): min=206, max=42092, avg=9723.18, stdev=17243.76 00:33:01.591 lat (usec): min=215, max=42117, avg=9735.82, stdev=17249.44 00:33:01.591 clat percentiles (usec): 00:33:01.591 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 225], 00:33:01.591 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 251], 60.00th=[ 262], 00:33:01.591 | 70.00th=[ 371], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:33:01.591 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:01.591 | 99.99th=[42206] 00:33:01.591 bw ( KiB/s): min= 96, max= 1800, per=4.18%, avg=438.40, stdev=761.17, samples=5 00:33:01.591 iops : min= 24, max= 450, avg=109.60, stdev=190.29, samples=5 00:33:01.591 lat (usec) : 250=49.82%, 500=25.98%, 750=0.71% 00:33:01.591 lat (msec) : 50=23.13% 00:33:01.591 cpu : usr=0.07%, sys=0.22%, ctx=281, majf=0, minf=2 00:33:01.591 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.591 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.591 issued rwts: total=281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.591 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:01.591 00:33:01.591 Run status group 0 (all jobs): 00:33:01.591 READ: bw=10.2MiB/s (10.7MB/s), 247KiB/s-10.1MiB/s (253kB/s-10.6MB/s), io=34.6MiB (36.3MB), run=2753-3380msec 00:33:01.591 00:33:01.591 Disk stats (read/write): 00:33:01.591 nvme0n1: ios=8028/0, merge=0/0, ticks=2966/0, in_queue=2966, util=94.98% 00:33:01.591 nvme0n2: ios=245/0, merge=0/0, ticks=4266/0, in_queue=4266, util=99.37% 00:33:01.591 nvme0n3: ios=277/0, merge=0/0, ticks=2812/0, in_queue=2812, util=96.52% 00:33:01.591 nvme0n4: ios=276/0, merge=0/0, ticks=2556/0, in_queue=2556, util=96.44% 00:33:01.847 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:01.847 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:33:02.105 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:02.105 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:33:02.362 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:02.363 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:33:02.363 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:02.363 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:33:02.620 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:33:02.620 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3991336 00:33:02.620 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:33:02.620 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:02.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:02.878 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:02.878 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:33:02.878 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:02.878 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:02.878 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:02.878 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:02.878 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:33:02.878 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:33:02.878 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:33:02.878 nvmf hotplug test: fio failed as expected 00:33:02.878 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:03.136 19:34:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:03.136 rmmod nvme_tcp 00:33:03.136 rmmod nvme_fabrics 00:33:03.136 rmmod nvme_keyring 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3986275 ']' 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3986275 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3986275 ']' 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3986275 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3986275 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3986275' 00:33:03.136 killing process with pid 3986275 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3986275 00:33:03.136 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3986275 00:33:03.395 19:34:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:03.395 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:03.395 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:03.395 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:33:03.395 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:33:03.395 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:03.395 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:03.395 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:03.395 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:03.395 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.395 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:03.395 19:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.299 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:05.299 00:33:05.299 real 0m25.886s 00:33:05.299 user 1m31.052s 00:33:05.299 sys 0m10.859s 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:05.559 ************************************ 00:33:05.559 END TEST nvmf_fio_target 00:33:05.559 ************************************ 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:05.559 ************************************ 00:33:05.559 START TEST nvmf_bdevio 00:33:05.559 ************************************ 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:05.559 * Looking for test storage... 
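The nvmftestfini teardown traced just above boils down to a short shell sequence. The following is a simplified sketch reconstructed from the trace, not the literal contents of nvmf/common.sh; the pid, interface and namespace names are simply the ones this particular run happened to use.

# Stop the target that was started for the fio test (pid recorded at startup).
nvmfpid=3986275
kill "$nvmfpid" && wait "$nvmfpid"

# Unload the kernel initiator modules; the rmmod lines in the log are modprobe's -v output.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Keep only the iptables rules that were not tagged by the test framework.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Drop the target-side network namespace and flush the initiator interface.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1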
00:33:05.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:05.559 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:33:05.560 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:33:05.560 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:05.560 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:33:05.560 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:33:05.560 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:33:05.560 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:33:05.560 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:05.560 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:33:05.560 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:33:05.560 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:05.560 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:05.560 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:33:05.560 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:05.560 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:05.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.560 --rc genhtml_branch_coverage=1 00:33:05.560 --rc genhtml_function_coverage=1 00:33:05.560 --rc genhtml_legend=1 00:33:05.560 --rc geninfo_all_blocks=1 00:33:05.560 --rc geninfo_unexecuted_blocks=1 00:33:05.560 00:33:05.560 ' 00:33:05.560 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:05.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.560 --rc genhtml_branch_coverage=1 00:33:05.560 --rc genhtml_function_coverage=1 00:33:05.560 --rc genhtml_legend=1 00:33:05.560 --rc geninfo_all_blocks=1 00:33:05.560 --rc geninfo_unexecuted_blocks=1 00:33:05.560 00:33:05.560 ' 00:33:05.560 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:05.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.560 --rc genhtml_branch_coverage=1 00:33:05.560 --rc genhtml_function_coverage=1 00:33:05.560 --rc genhtml_legend=1 00:33:05.560 --rc geninfo_all_blocks=1 00:33:05.560 --rc geninfo_unexecuted_blocks=1 00:33:05.560 00:33:05.560 ' 00:33:05.560 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:05.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.560 --rc genhtml_branch_coverage=1 00:33:05.560 --rc genhtml_function_coverage=1 00:33:05.560 --rc genhtml_legend=1 00:33:05.560 --rc geninfo_all_blocks=1 00:33:05.560 --rc geninfo_unexecuted_blocks=1 00:33:05.560 00:33:05.560 ' 00:33:05.560 19:34:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:05.560 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:05.819 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:05.820 19:34:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:33:05.820 19:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:12.420 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:12.420 19:34:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:12.420 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:12.420 Found net devices under 0000:86:00.0: cvl_0_0 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:12.420 Found net devices under 0000:86:00.1: cvl_0_1 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:12.420 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:12.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:12.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:33:12.421 00:33:12.421 --- 10.0.0.2 ping statistics --- 00:33:12.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.421 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:12.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:12.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:33:12.421 00:33:12.421 --- 10.0.0.1 ping statistics --- 00:33:12.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.421 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:12.421 19:34:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3998674 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3998674 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3998674 ']' 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:12.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.421 [2024-11-26 19:34:34.601227] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:12.421 [2024-11-26 19:34:34.602132] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:33:12.421 [2024-11-26 19:34:34.602164] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.421 [2024-11-26 19:34:34.679077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:12.421 [2024-11-26 19:34:34.720813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:12.421 [2024-11-26 19:34:34.720854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:12.421 [2024-11-26 19:34:34.720861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:12.421 [2024-11-26 19:34:34.720867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:12.421 [2024-11-26 19:34:34.720875] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:12.421 [2024-11-26 19:34:34.722283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:12.421 [2024-11-26 19:34:34.722373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:12.421 [2024-11-26 19:34:34.722482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:12.421 [2024-11-26 19:34:34.722483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:12.421 [2024-11-26 19:34:34.790042] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
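For the bdevio run the harness first builds a two-port loopback topology out of the two e810 ports discovered above (cvl_0_0 and cvl_0_1) and then starts the target inside a network namespace with interrupt mode enabled. A condensed sketch of those steps as they appear in the trace follows; the IP addresses, interface and namespace names are specific to this machine, and the real logic lives in nvmf_tcp_init and nvmfappstart rather than in this snippet.

# Move one port into its own namespace so target and initiator traffic crosses real NICs.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port and check reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF  # real rule embeds the full rule text in the comment
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Start the target in the namespace; -m 0x78 pins reactors to cores 3-6, matching the notices above.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
nvmfpid=$!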
00:33:12.421 [2024-11-26 19:34:34.790633] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:12.421 [2024-11-26 19:34:34.791402] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:12.421 [2024-11-26 19:34:34.791540] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:12.421 [2024-11-26 19:34:34.791684] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.421 [2024-11-26 19:34:34.871189] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.421 Malloc0 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.421 19:34:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.421 [2024-11-26 19:34:34.955476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:12.421 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:12.421 { 00:33:12.421 "params": { 00:33:12.421 "name": "Nvme$subsystem", 00:33:12.421 "trtype": "$TEST_TRANSPORT", 00:33:12.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:12.421 "adrfam": "ipv4", 00:33:12.421 "trsvcid": "$NVMF_PORT", 00:33:12.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:12.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:12.422 "hdgst": ${hdgst:-false}, 00:33:12.422 "ddgst": ${ddgst:-false} 00:33:12.422 }, 00:33:12.422 "method": "bdev_nvme_attach_controller" 00:33:12.422 } 00:33:12.422 EOF 00:33:12.422 )") 00:33:12.422 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:33:12.422 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:33:12.422 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:33:12.422 19:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:12.422 "params": { 00:33:12.422 "name": "Nvme1", 00:33:12.422 "trtype": "tcp", 00:33:12.422 "traddr": "10.0.0.2", 00:33:12.422 "adrfam": "ipv4", 00:33:12.422 "trsvcid": "4420", 00:33:12.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:12.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:12.422 "hdgst": false, 00:33:12.422 "ddgst": false 00:33:12.422 }, 00:33:12.422 "method": "bdev_nvme_attach_controller" 00:33:12.422 }' 00:33:12.422 [2024-11-26 19:34:35.005343] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:33:12.422 [2024-11-26 19:34:35.005391] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3998700 ] 00:33:12.422 [2024-11-26 19:34:35.084354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:12.422 [2024-11-26 19:34:35.129511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.422 [2024-11-26 19:34:35.129549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.422 [2024-11-26 19:34:35.129550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:12.422 I/O targets: 00:33:12.422 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:33:12.422 00:33:12.422 00:33:12.422 CUnit - A unit testing framework for C - Version 2.1-3 00:33:12.422 http://cunit.sourceforge.net/ 00:33:12.422 00:33:12.422 00:33:12.422 Suite: bdevio tests on: Nvme1n1 00:33:12.422 Test: blockdev write read block ...passed 00:33:12.422 Test: blockdev write zeroes read block ...passed 00:33:12.422 Test: blockdev write zeroes read no split ...passed 00:33:12.422 Test: blockdev write zeroes read split ...passed 00:33:12.422 Test: blockdev write zeroes read split partial ...passed 00:33:12.422 Test: blockdev reset ...[2024-11-26 19:34:35.430799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:12.422 [2024-11-26 19:34:35.430863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae1350 (9): Bad file descriptor 00:33:12.745 [2024-11-26 19:34:35.525781] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:33:12.745 passed 00:33:12.745 Test: blockdev write read 8 blocks ...passed 00:33:12.745 Test: blockdev write read size > 128k ...passed 00:33:12.745 Test: blockdev write read invalid size ...passed 00:33:12.745 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:12.745 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:12.745 Test: blockdev write read max offset ...passed 00:33:12.745 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:12.745 Test: blockdev writev readv 8 blocks ...passed 00:33:12.745 Test: blockdev writev readv 30 x 1block ...passed 00:33:12.745 Test: blockdev writev readv block ...passed 00:33:12.746 Test: blockdev writev readv size > 128k ...passed 00:33:12.746 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:12.746 Test: blockdev comparev and writev ...[2024-11-26 19:34:35.776523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:12.746 [2024-11-26 19:34:35.776552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:12.746 [2024-11-26 19:34:35.776565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:12.746 [2024-11-26 19:34:35.776573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:12.746 [2024-11-26 19:34:35.776873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:12.746 [2024-11-26 19:34:35.776884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:12.746 [2024-11-26 19:34:35.776895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:12.746 [2024-11-26 19:34:35.776902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:12.746 [2024-11-26 19:34:35.777191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:12.746 [2024-11-26 19:34:35.777202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:12.746 [2024-11-26 19:34:35.777213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:12.746 [2024-11-26 19:34:35.777221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:12.746 [2024-11-26 19:34:35.777501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:12.746 [2024-11-26 19:34:35.777512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:12.746 [2024-11-26 19:34:35.777524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:12.746 [2024-11-26 19:34:35.777531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:12.746 passed 00:33:13.030 Test: blockdev nvme passthru rw ...passed 00:33:13.030 Test: blockdev nvme passthru vendor specific ...[2024-11-26 19:34:35.859041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:13.030 [2024-11-26 19:34:35.859057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:13.030 [2024-11-26 19:34:35.859159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:13.030 [2024-11-26 19:34:35.859170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:13.030 [2024-11-26 19:34:35.859274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:13.030 [2024-11-26 19:34:35.859288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:13.030 [2024-11-26 19:34:35.859393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:13.030 [2024-11-26 19:34:35.859403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:13.030 passed 00:33:13.030 Test: blockdev nvme admin passthru ...passed 00:33:13.030 Test: blockdev copy ...passed 00:33:13.030 00:33:13.030 Run Summary: Type Total Ran Passed Failed Inactive 00:33:13.030 suites 1 1 n/a 0 0 00:33:13.030 tests 23 23 23 0 0 00:33:13.030 asserts 152 152 152 0 n/a 00:33:13.030 00:33:13.030 Elapsed time = 1.187 seconds 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:13.030 rmmod nvme_tcp 00:33:13.030 rmmod nvme_fabrics 00:33:13.030 rmmod nvme_keyring 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
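The bdevio pass that just finished was configured entirely over the RPC socket; the objects it exercised can be rebuilt by hand with the same calls that appear in the trace. A sketch is shown below: paths are relative to the SPDK tree, the NQN, serial and Malloc sizes are the values this test uses, and nvme1.json is a stand-in for the config the test actually generates on /dev/fd/62 via gen_nvmf_target_json.

# Target side: transport, 64 MiB malloc bdev with 512-byte blocks, subsystem, namespace, listener.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevio consumes a JSON config with one bdev_nvme_attach_controller entry
# for Nvme1 at 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode1, exactly as printed in the trace.
test/bdev/bdevio/bdevio --json nvme1.json

# Cleanup mirrors the trace: drop the subsystem, then nvmftestfini tears the rest down.
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1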
00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3998674 ']' 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3998674 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3998674 ']' 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3998674 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:13.030 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3998674 00:33:13.328 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:33:13.328 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:33:13.328 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3998674' 00:33:13.328 killing process with pid 3998674 00:33:13.328 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3998674 00:33:13.328 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3998674 00:33:13.328 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:13.328 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:13.328 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:13.328 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:33:13.328 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:33:13.328 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:13.328 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:33:13.328 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:13.328 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:13.328 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:13.328 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:13.328 19:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.898 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:15.898 00:33:15.898 real 0m9.942s 00:33:15.898 user 
0m8.759s 00:33:15.898 sys 0m5.277s 00:33:15.898 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:15.898 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:15.898 ************************************ 00:33:15.898 END TEST nvmf_bdevio 00:33:15.898 ************************************ 00:33:15.898 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:15.898 00:33:15.898 real 4m32.134s 00:33:15.898 user 9m1.596s 00:33:15.898 sys 1m51.404s 00:33:15.898 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:15.898 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:15.898 ************************************ 00:33:15.898 END TEST nvmf_target_core_interrupt_mode 00:33:15.898 ************************************ 00:33:15.898 19:34:38 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:15.898 19:34:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:15.898 19:34:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:15.898 19:34:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:15.898 ************************************ 00:33:15.898 START TEST nvmf_interrupt 00:33:15.898 ************************************ 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:15.898 * Looking for test storage... 
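Every suite in this log is wrapped by the run_test helper, which prints the START TEST/END TEST banners and the real/user/sys timings that bracket the output above. The function below is a deliberately simplified, hypothetical stand-in that produces the same shape of output; it is not the actual helper from autotest_common.sh.

# Hypothetical minimal run_test: banner, timing, propagate the script's exit status.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# Usage matching the invocation above:
# run_test nvmf_interrupt ./test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode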
00:33:15.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:15.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.898 --rc genhtml_branch_coverage=1 00:33:15.898 --rc genhtml_function_coverage=1 00:33:15.898 --rc genhtml_legend=1 00:33:15.898 --rc geninfo_all_blocks=1 00:33:15.898 --rc geninfo_unexecuted_blocks=1 00:33:15.898 00:33:15.898 ' 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:15.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.898 --rc genhtml_branch_coverage=1 00:33:15.898 --rc genhtml_function_coverage=1 00:33:15.898 --rc genhtml_legend=1 00:33:15.898 --rc geninfo_all_blocks=1 00:33:15.898 --rc geninfo_unexecuted_blocks=1 00:33:15.898 00:33:15.898 ' 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:15.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.898 --rc genhtml_branch_coverage=1 00:33:15.898 --rc genhtml_function_coverage=1 00:33:15.898 --rc genhtml_legend=1 00:33:15.898 --rc geninfo_all_blocks=1 00:33:15.898 --rc geninfo_unexecuted_blocks=1 00:33:15.898 00:33:15.898 ' 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:15.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.898 --rc genhtml_branch_coverage=1 00:33:15.898 --rc genhtml_function_coverage=1 00:33:15.898 --rc genhtml_legend=1 00:33:15.898 --rc geninfo_all_blocks=1 00:33:15.898 --rc geninfo_unexecuted_blocks=1 00:33:15.898 00:33:15.898 ' 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.898 19:34:38 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:33:15.899 19:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:22.470 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:22.470 19:34:44 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:22.470 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:22.470 Found net devices under 0000:86:00.0: cvl_0_0 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:22.470 Found net devices under 0000:86:00.1: cvl_0_1 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:22.470 19:34:44 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:22.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:22.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:33:22.470 00:33:22.470 --- 10.0.0.2 ping statistics --- 00:33:22.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:22.470 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:22.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:22.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:33:22.470 00:33:22.470 --- 10.0.0.1 ping statistics --- 00:33:22.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:22.470 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:22.470 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=4002940 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 4002940 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 4002940 ']' 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:22.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:22.471 [2024-11-26 19:34:44.722790] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:22.471 [2024-11-26 19:34:44.723749] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:33:22.471 [2024-11-26 19:34:44.723786] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:22.471 [2024-11-26 19:34:44.799523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:22.471 [2024-11-26 19:34:44.844238] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
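# --- Editor's note: the two successful pings above conclude nvmf_tcp_init. The test
# splits the two Intel E810 (ice) ports discovered earlier across network namespaces:
# the target-side port moves into a private namespace, the initiator-side port stays
# in the root namespace, and an iptables rule (tagged SPDK_NVMF so teardown can strip
# it) opens TCP port 4420. A condensed sketch assuming the interface names and
# addresses from this run, with paths shortened relative to the SPDK tree:
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                       # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                   # target namespace -> initiator
modprobe nvme-tcp                                        # kernel initiator used later by nvme connect
# the target is then started inside the namespace, in interrupt mode on two cores (mask 0x3):
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &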
00:33:22.471 [2024-11-26 19:34:44.844269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:22.471 [2024-11-26 19:34:44.844276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:22.471 [2024-11-26 19:34:44.844282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:22.471 [2024-11-26 19:34:44.844287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:22.471 [2024-11-26 19:34:44.845525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:22.471 [2024-11-26 19:34:44.845527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.471 [2024-11-26 19:34:44.913601] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:22.471 [2024-11-26 19:34:44.914333] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:22.471 [2024-11-26 19:34:44.914506] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:22.471 19:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:33:22.471 5000+0 records in 00:33:22.471 5000+0 records out 00:33:22.471 10240000 bytes (10 MB, 9.8 MiB) copied, 0.017348 s, 590 MB/s 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:22.471 AIO0 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:22.471 [2024-11-26 19:34:45.050272] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.471 19:34:45 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:22.471 [2024-11-26 19:34:45.078492] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 4002940 0 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4002940 0 idle 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4002940 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4002940 -w 256 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4002940 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.25 reactor_0' 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4002940 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.25 reactor_0 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 4002940 1 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4002940 1 idle 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4002940 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4002940 -w 256 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4002950 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4002950 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=4003254 00:33:22.471 19:34:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:22.472 19:34:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
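# --- Editor's note: the target provisioned a few lines above (AIO bdev, TCP transport,
# subsystem cnode1 with a listener on 10.0.0.2:4420) is now exercised with the
# spdk_nvme_perf command launched just before this point. A condensed sketch of the
# same provision-and-run sequence; rpc_cmd in the trace is a thin wrapper around
# scripts/rpc.py, paths are shortened, and the transport options are copied verbatim
# from the trace rather than interpreted:
dd if=/dev/zero of=./aiofile bs=2048 count=5000           # 10 MB backing file for the namespace
./scripts/rpc.py bdev_aio_create ./aiofile AIO0 2048      # expose it as bdev AIO0 (2048-byte blocks)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# the workload: queue depth 256, 4 KiB I/O, random read/write with a 30% read mix,
# 10 seconds, on cores 2-3 (mask 0xC), against the TCP listener:
./build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
perf_pid=$!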
00:33:22.472 19:34:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:22.472 19:34:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 4002940 0 00:33:22.472 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 4002940 0 busy 00:33:22.472 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4002940 00:33:22.472 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:22.472 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:22.472 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:22.472 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:22.472 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:22.472 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:22.472 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:22.472 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:22.472 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4002940 -w 256 00:33:22.472 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:22.731 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4002940 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.43 reactor_0' 00:33:22.731 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4002940 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.43 reactor_0 00:33:22.731 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 4002940 1 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 4002940 1 busy 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4002940 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4002940 -w 256 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4002950 root 20 0 128.2g 46848 33792 R 87.5 0.0 0:00.27 reactor_1' 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4002950 root 20 0 128.2g 46848 33792 R 87.5 0.0 0:00.27 reactor_1 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=87.5 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=87 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:22.732 19:34:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 4003254 00:33:32.703 Initializing NVMe Controllers 00:33:32.704 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:32.704 Controller IO queue size 256, less than required. 00:33:32.704 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:32.704 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:32.704 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:32.704 Initialization complete. Launching workers. 
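# --- Editor's note: the reactor_is_busy/reactor_is_idle checks above and below decide
# busy vs. idle by sampling per-thread CPU usage with top. A condensed sketch of the
# logic traced from test/interrupt/common.sh; reactor_cpu_pct is a helper name used
# only for this sketch, and the thresholds are the ones in effect in this run (busy
# means >= 30% CPU while perf runs, idle means <= 30% otherwise):
reactor_cpu_pct() {
    local pid=$1 idx=$2
    # -H lists threads; each SPDK reactor shows up as a thread named reactor_<core>
    top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | sed -e 's/^\s*//g' | awk '{print $9}'
}
rate=$(reactor_cpu_pct 4002940 0)    # e.g. 99.9 while spdk_nvme_perf is running
rate=${rate%.*}                      # keep the integer part (the trace shows 99.9 -> 99)
if (( rate >= 30 )); then echo "reactor_0 busy"; else echo "reactor_0 idle"; fi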
00:33:32.704 ======================================================== 00:33:32.704 Latency(us) 00:33:32.704 Device Information : IOPS MiB/s Average min max 00:33:32.704 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16802.40 65.63 15242.78 3561.22 28514.43 00:33:32.704 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16600.20 64.84 15425.86 7779.31 25834.98 00:33:32.704 ======================================================== 00:33:32.704 Total : 33402.60 130.48 15333.77 3561.22 28514.43 00:33:32.704 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 4002940 0 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4002940 0 idle 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4002940 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4002940 -w 256 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4002940 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.22 reactor_0' 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4002940 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.22 reactor_0 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 4002940 1 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4002940 1 idle 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4002940 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4002940 -w 256 00:33:32.704 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:32.963 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4002950 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:09.99 reactor_1' 00:33:32.963 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4002950 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:09.99 reactor_1 00:33:32.963 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:32.963 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:32.963 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:32.963 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:32.963 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:32.963 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:32.963 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:32.963 19:34:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:32.963 19:34:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:33.222 19:34:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:33:33.222 19:34:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:33:33.222 19:34:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:33.222 19:34:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:33.222 19:34:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 4002940 0 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4002940 0 idle 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4002940 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4002940 -w 256 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4002940 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.46 reactor_0' 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4002940 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.46 reactor_0 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 4002940 1 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4002940 1 idle 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4002940 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
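# --- Editor's note: the nvme connect / waitforserial exchange above is the
# kernel-initiator half of the test: connect to the subsystem from the root namespace,
# then poll lsblk until a block device carrying the subsystem serial appears. A
# condensed sketch, with the retry limit and error handling simplified from
# autotest_common.sh ($NVME_HOSTNQN and $NVME_HOSTID come from the sourced common.sh):
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
for _ in $(seq 1 15); do
    (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 1 )) && break
    sleep 2
done
# ... reactor idle checks with the controller connected, as in the lines that follow ...
nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # torn down again before nvmftestfini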
00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4002940 -w 256 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4002950 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.09 reactor_1' 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4002950 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.09 reactor_1 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:35.759 19:34:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:36.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:36.018 19:34:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:36.018 19:34:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:33:36.018 19:34:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:36.018 19:34:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:36.018 19:34:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:36.018 19:34:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:36.018 19:34:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:33:36.018 19:34:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:33:36.018 19:34:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:33:36.018 19:34:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:36.018 19:34:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:33:36.018 19:34:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:36.018 19:34:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:33:36.018 19:34:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:36.018 19:34:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:36.018 rmmod nvme_tcp 00:33:36.018 rmmod nvme_fabrics 00:33:36.018 rmmod nvme_keyring 00:33:36.018 19:34:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:36.018 19:34:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:33:36.018 19:34:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:33:36.018 19:34:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
4002940 ']' 00:33:36.018 19:34:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 4002940 00:33:36.018 19:34:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 4002940 ']' 00:33:36.018 19:34:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 4002940 00:33:36.018 19:34:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:33:36.018 19:34:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:36.018 19:34:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4002940 00:33:36.018 19:34:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:36.018 19:34:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:36.018 19:34:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4002940' 00:33:36.018 killing process with pid 4002940 00:33:36.018 19:34:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 4002940 00:33:36.019 19:34:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 4002940 00:33:36.280 19:34:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:36.280 19:34:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:36.280 19:34:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:36.280 19:34:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:33:36.280 19:34:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:33:36.280 19:34:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:36.280 19:34:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:33:36.280 19:34:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:36.280 19:34:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:36.280 19:34:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.280 19:34:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:36.280 19:34:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.817 19:35:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:38.817 00:33:38.817 real 0m22.821s 00:33:38.817 user 0m39.634s 00:33:38.817 sys 0m8.316s 00:33:38.817 19:35:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:38.817 19:35:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:38.817 ************************************ 00:33:38.817 END TEST nvmf_interrupt 00:33:38.817 ************************************ 00:33:38.817 00:33:38.817 real 27m27.295s 00:33:38.817 user 56m44.268s 00:33:38.817 sys 9m17.403s 00:33:38.817 19:35:01 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:38.817 19:35:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:38.817 ************************************ 00:33:38.817 END TEST nvmf_tcp 00:33:38.817 ************************************ 00:33:38.817 19:35:01 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:33:38.817 19:35:01 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:38.817 19:35:01 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:38.817 19:35:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:38.817 19:35:01 -- common/autotest_common.sh@10 -- # set +x 00:33:38.817 ************************************ 00:33:38.817 START TEST spdkcli_nvmf_tcp 00:33:38.817 ************************************ 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:38.817 * Looking for test storage... 00:33:38.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:38.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.817 --rc genhtml_branch_coverage=1 00:33:38.817 --rc genhtml_function_coverage=1 00:33:38.817 --rc genhtml_legend=1 00:33:38.817 --rc geninfo_all_blocks=1 00:33:38.817 --rc geninfo_unexecuted_blocks=1 00:33:38.817 00:33:38.817 ' 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:38.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.817 --rc genhtml_branch_coverage=1 00:33:38.817 --rc genhtml_function_coverage=1 00:33:38.817 --rc genhtml_legend=1 00:33:38.817 --rc geninfo_all_blocks=1 00:33:38.817 --rc geninfo_unexecuted_blocks=1 00:33:38.817 00:33:38.817 ' 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:38.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.817 --rc genhtml_branch_coverage=1 00:33:38.817 --rc genhtml_function_coverage=1 00:33:38.817 --rc genhtml_legend=1 00:33:38.817 --rc geninfo_all_blocks=1 00:33:38.817 --rc geninfo_unexecuted_blocks=1 00:33:38.817 00:33:38.817 ' 00:33:38.817 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:38.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.817 --rc genhtml_branch_coverage=1 00:33:38.817 --rc genhtml_function_coverage=1 00:33:38.817 --rc genhtml_legend=1 00:33:38.817 --rc geninfo_all_blocks=1 00:33:38.818 --rc geninfo_unexecuted_blocks=1 00:33:38.818 00:33:38.818 ' 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:38.818 
19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:38.818 19:35:01 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:38.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=4008938 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 4008938 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 4008938 ']' 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:38.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:38.818 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:38.818 [2024-11-26 19:35:01.718971] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
00:33:38.818 [2024-11-26 19:35:01.719020] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4008938 ] 00:33:38.818 [2024-11-26 19:35:01.792969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:38.818 [2024-11-26 19:35:01.841848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:38.818 [2024-11-26 19:35:01.841851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:39.077 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:39.077 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:33:39.077 19:35:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:39.077 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:39.077 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:39.077 19:35:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:39.077 19:35:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:39.077 19:35:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:39.077 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:39.077 19:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:39.077 19:35:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:39.077 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:39.077 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:39.077 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:39.077 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:39.077 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:39.077 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:39.077 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:39.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:39.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:39.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:39.077 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:39.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:39.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:39.077 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:39.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:39.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:33:39.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:39.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:39.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:39.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:39.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:39.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:39.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:39.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:39.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:39.077 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:39.077 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:39.077 ' 00:33:41.610 [2024-11-26 19:35:04.681258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:42.986 [2024-11-26 19:35:06.017694] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:45.512 [2024-11-26 19:35:08.505492] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:48.042 [2024-11-26 19:35:10.680327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:49.421 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:49.421 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:49.421 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:49.421 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:49.421 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:49.421 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:49.421 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:49.421 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:49.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:49.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:49.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:49.421 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:49.421 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:49.421 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:49.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:49.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:49.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:49.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:49.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:49.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:49.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:49.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:49.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:49.421 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:49.421 19:35:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:49.421 19:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:49.421 19:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.421 19:35:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:49.421 19:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:49.421 19:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.421 19:35:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:49.421 19:35:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:49.990 19:35:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:49.990 19:35:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:49.990 19:35:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:49.990 19:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:49.990 19:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.990 
19:35:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:49.990 19:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:49.990 19:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.990 19:35:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:49.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:49.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:49.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:49.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:49.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:49.990 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:49.990 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:49.990 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:49.990 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:49.990 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:49.990 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:49.990 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:49.990 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:49.990 ' 00:33:56.560 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:56.560 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:56.560 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:56.560 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:56.560 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:56.560 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:56.560 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:56.560 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:56.560 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:56.560 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:56.560 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:56.560 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:56.560 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:56.560 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:56.560 
19:35:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 4008938 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 4008938 ']' 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 4008938 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4008938 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4008938' 00:33:56.560 killing process with pid 4008938 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 4008938 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 4008938 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 4008938 ']' 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 4008938 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 4008938 ']' 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 4008938 00:33:56.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4008938) - No such process 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 4008938 is not found' 00:33:56.560 Process with pid 4008938 is not found 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:56.560 00:33:56.560 real 0m17.368s 00:33:56.560 user 0m38.306s 00:33:56.560 sys 0m0.799s 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:56.560 19:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:56.560 ************************************ 00:33:56.560 END TEST spdkcli_nvmf_tcp 00:33:56.560 ************************************ 00:33:56.560 19:35:18 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:56.560 19:35:18 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:56.560 19:35:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:56.560 19:35:18 -- common/autotest_common.sh@10 -- # set +x 00:33:56.560 ************************************ 00:33:56.560 START TEST nvmf_identify_passthru 00:33:56.560 ************************************ 00:33:56.560 19:35:18 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:56.560 * Looking for test 
storage... 00:33:56.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:56.560 19:35:18 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:56.560 19:35:18 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:33:56.560 19:35:18 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:56.560 19:35:19 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:56.560 19:35:19 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:56.561 19:35:19 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:56.561 19:35:19 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:56.561 19:35:19 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:56.561 19:35:19 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:56.561 19:35:19 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:56.561 19:35:19 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:56.561 19:35:19 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:56.561 19:35:19 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:56.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.561 --rc genhtml_branch_coverage=1 00:33:56.561 --rc genhtml_function_coverage=1 00:33:56.561 --rc genhtml_legend=1 00:33:56.561 --rc geninfo_all_blocks=1 00:33:56.561 --rc geninfo_unexecuted_blocks=1 00:33:56.561 00:33:56.561 ' 00:33:56.561 19:35:19 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:56.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.561 --rc genhtml_branch_coverage=1 00:33:56.561 --rc genhtml_function_coverage=1 00:33:56.561 --rc genhtml_legend=1 00:33:56.561 --rc geninfo_all_blocks=1 00:33:56.561 --rc geninfo_unexecuted_blocks=1 00:33:56.561 00:33:56.561 ' 00:33:56.561 19:35:19 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:56.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.561 --rc genhtml_branch_coverage=1 00:33:56.561 --rc genhtml_function_coverage=1 00:33:56.561 --rc genhtml_legend=1 00:33:56.561 --rc geninfo_all_blocks=1 00:33:56.561 --rc geninfo_unexecuted_blocks=1 00:33:56.561 00:33:56.561 ' 00:33:56.561 19:35:19 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:56.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.561 --rc genhtml_branch_coverage=1 00:33:56.561 --rc genhtml_function_coverage=1 00:33:56.561 --rc genhtml_legend=1 00:33:56.561 --rc geninfo_all_blocks=1 00:33:56.561 --rc geninfo_unexecuted_blocks=1 00:33:56.561 00:33:56.561 ' 00:33:56.561 19:35:19 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:56.561 19:35:19 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:56.561 19:35:19 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:56.561 19:35:19 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:56.561 19:35:19 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:56.561 19:35:19 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.561 19:35:19 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.561 19:35:19 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.561 19:35:19 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:56.561 19:35:19 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:56.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:56.561 19:35:19 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:56.561 19:35:19 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:56.561 19:35:19 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:56.561 19:35:19 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:56.561 19:35:19 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:56.561 19:35:19 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.561 19:35:19 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.561 19:35:19 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.561 19:35:19 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:56.561 19:35:19 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.561 19:35:19 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.561 19:35:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:56.561 19:35:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:56.561 19:35:19 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:56.561 19:35:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:01.838 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:01.838 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:34:01.838 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:01.838 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:01.838 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:01.838 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:01.838 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:01.838 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:34:01.838 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:01.838 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:34:01.839 19:35:24 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:01.839 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:01.839 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:01.839 Found net devices under 0000:86:00.0: cvl_0_0 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:01.839 Found net devices under 0000:86:00.1: cvl_0_1 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:01.839 19:35:24 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:01.839 19:35:24 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:02.099 19:35:25 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:02.099 19:35:25 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:02.099 19:35:25 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:02.099 19:35:25 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:02.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:02.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:34:02.099 00:34:02.099 --- 10.0.0.2 ping statistics --- 00:34:02.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:02.099 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:34:02.099 19:35:25 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:02.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:02.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:34:02.099 00:34:02.099 --- 10.0.0.1 ping statistics --- 00:34:02.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:02.099 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:34:02.099 19:35:25 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:02.099 19:35:25 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:34:02.099 19:35:25 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:02.099 19:35:25 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:02.099 19:35:25 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:02.099 19:35:25 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:02.099 19:35:25 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:02.099 19:35:25 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:02.099 19:35:25 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:02.099 19:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:02.099 19:35:25 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:02.099 19:35:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:02.099 19:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:02.099 19:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:02.099 19:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:02.099 19:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:34:02.099 19:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:34:02.099 19:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:02.099 19:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:34:02.099 19:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:02.099 19:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:02.099 19:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:02.099 19:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:34:02.099 19:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:34:02.099 19:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:34:02.099 19:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:34:02.099 19:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:34:02.099 19:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:02.099 19:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:02.099 19:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:07.370 19:35:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLN951000C61P6AGN 00:34:07.370 19:35:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:07.370 19:35:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:07.370 19:35:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:11.559 19:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:11.559 19:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:11.559 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:11.559 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:11.559 19:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:11.559 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:11.559 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:11.818 19:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=4022028 00:34:11.818 19:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:11.818 19:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:11.818 19:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 4022028 00:34:11.818 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 4022028 ']' 00:34:11.818 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.818 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.818 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:11.818 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.818 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:11.818 [2024-11-26 19:35:34.720734] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:34:11.818 [2024-11-26 19:35:34.720774] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:11.818 [2024-11-26 19:35:34.802764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:11.818 [2024-11-26 19:35:34.849729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:11.818 [2024-11-26 19:35:34.849765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
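Before the TCP target comes up, the passthru test takes a baseline straight from the local PCIe controller: gen_nvme.sh piped through jq yields the first NVMe BDF (0000:5e:00.0 in this run), spdk_nvme_identify is pointed at that address, and the Serial Number and Model Number fields are stashed so they can later be compared against what the NVMe-oF subsystem reports. The shell sketch below condenses that sequence; the $rootdir variable, the head -n1 selection, and the backgrounding of nvmf_tgt are illustrative shorthand, while the binaries, flags, and the cvl_0_0_ns_spdk namespace name are copied from the trace itself.

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# First NVMe PCI address known to the SPDK scripts (0000:5e:00.0 here).
bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
# Identify the controller over PCIe and keep the fields the test compares later.
nvme_serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
nvme_model=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
# Launch the target inside the test namespace, paused until RPC configuration arrives.
ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!

The --wait-for-rpc flag is what makes the next step work: nvmf_set_config --passthru-identify-ctrlr is delivered before framework_start_init, so the custom identify handler is registered before the TCP transport is created and starts accepting connections.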
00:34:11.818 [2024-11-26 19:35:34.849772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:11.818 [2024-11-26 19:35:34.849779] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:11.818 [2024-11-26 19:35:34.849784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:11.818 [2024-11-26 19:35:34.851350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:11.818 [2024-11-26 19:35:34.851497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:11.818 [2024-11-26 19:35:34.851603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.818 [2024-11-26 19:35:34.851604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:11.818 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:11.818 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:34:11.818 19:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:11.818 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.818 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:11.818 INFO: Log level set to 20 00:34:11.818 INFO: Requests: 00:34:11.818 { 00:34:11.818 "jsonrpc": "2.0", 00:34:11.818 "method": "nvmf_set_config", 00:34:11.818 "id": 1, 00:34:11.818 "params": { 00:34:11.818 "admin_cmd_passthru": { 00:34:11.818 "identify_ctrlr": true 00:34:11.818 } 00:34:11.818 } 00:34:11.818 } 00:34:11.818 00:34:11.818 INFO: response: 00:34:11.818 { 00:34:11.818 "jsonrpc": "2.0", 00:34:11.818 "id": 1, 00:34:11.818 "result": true 00:34:11.818 } 00:34:11.818 00:34:11.818 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.818 19:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:11.818 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.818 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:11.818 INFO: Setting log level to 20 00:34:11.818 INFO: Setting log level to 20 00:34:11.818 INFO: Log level set to 20 00:34:11.818 INFO: Log level set to 20 00:34:11.818 INFO: Requests: 00:34:11.818 { 00:34:11.818 "jsonrpc": "2.0", 00:34:11.818 "method": "framework_start_init", 00:34:11.818 "id": 1 00:34:11.818 } 00:34:11.818 00:34:11.818 INFO: Requests: 00:34:11.818 { 00:34:11.818 "jsonrpc": "2.0", 00:34:11.818 "method": "framework_start_init", 00:34:11.818 "id": 1 00:34:11.818 } 00:34:11.818 00:34:12.078 [2024-11-26 19:35:34.959858] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:12.078 INFO: response: 00:34:12.078 { 00:34:12.078 "jsonrpc": "2.0", 00:34:12.078 "id": 1, 00:34:12.078 "result": true 00:34:12.078 } 00:34:12.078 00:34:12.078 INFO: response: 00:34:12.078 { 00:34:12.078 "jsonrpc": "2.0", 00:34:12.078 "id": 1, 00:34:12.078 "result": true 00:34:12.078 } 00:34:12.078 00:34:12.078 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.078 19:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:12.078 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.078 19:35:34 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:12.078 INFO: Setting log level to 40 00:34:12.078 INFO: Setting log level to 40 00:34:12.078 INFO: Setting log level to 40 00:34:12.078 [2024-11-26 19:35:34.973209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:12.078 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.078 19:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:12.078 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:12.078 19:35:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:12.078 19:35:35 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:34:12.078 19:35:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.078 19:35:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:15.359 Nvme0n1 00:34:15.359 19:35:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.360 19:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:15.360 19:35:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.360 19:35:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:15.360 19:35:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.360 19:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:15.360 19:35:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.360 19:35:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:15.360 19:35:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.360 19:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:15.360 19:35:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.360 19:35:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:15.360 [2024-11-26 19:35:37.879047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:15.360 19:35:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.360 19:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:15.360 19:35:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.360 19:35:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:15.360 [ 00:34:15.360 { 00:34:15.360 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:15.360 "subtype": "Discovery", 00:34:15.360 "listen_addresses": [], 00:34:15.360 "allow_any_host": true, 00:34:15.360 "hosts": [] 00:34:15.360 }, 00:34:15.360 { 00:34:15.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:15.360 "subtype": "NVMe", 00:34:15.360 "listen_addresses": [ 00:34:15.360 { 00:34:15.360 "trtype": "TCP", 00:34:15.360 "adrfam": "IPv4", 00:34:15.360 "traddr": "10.0.0.2", 00:34:15.360 "trsvcid": "4420" 00:34:15.360 } 00:34:15.360 ], 00:34:15.360 "allow_any_host": true, 00:34:15.360 "hosts": [], 00:34:15.360 "serial_number": 
"SPDK00000000000001", 00:34:15.360 "model_number": "SPDK bdev Controller", 00:34:15.360 "max_namespaces": 1, 00:34:15.360 "min_cntlid": 1, 00:34:15.360 "max_cntlid": 65519, 00:34:15.360 "namespaces": [ 00:34:15.360 { 00:34:15.360 "nsid": 1, 00:34:15.360 "bdev_name": "Nvme0n1", 00:34:15.360 "name": "Nvme0n1", 00:34:15.360 "nguid": "F14286B19279461FB0B9BCE3F150920F", 00:34:15.360 "uuid": "f14286b1-9279-461f-b0b9-bce3f150920f" 00:34:15.360 } 00:34:15.360 ] 00:34:15.360 } 00:34:15.360 ] 00:34:15.360 19:35:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.360 19:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:15.360 19:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:15.360 19:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:15.360 19:35:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:34:15.360 19:35:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:15.360 19:35:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:15.360 19:35:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:15.360 19:35:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:15.360 19:35:38 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:34:15.360 19:35:38 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:15.360 19:35:38 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:15.360 19:35:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.360 19:35:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:15.360 19:35:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.360 19:35:38 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:15.360 19:35:38 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:15.360 19:35:38 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:15.360 19:35:38 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:34:15.360 19:35:38 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:15.360 19:35:38 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:34:15.360 19:35:38 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:15.360 19:35:38 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:15.360 rmmod nvme_tcp 00:34:15.360 rmmod nvme_fabrics 00:34:15.360 rmmod nvme_keyring 00:34:15.360 19:35:38 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:15.360 19:35:38 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:34:15.360 19:35:38 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:34:15.360 19:35:38 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 4022028 ']' 00:34:15.360 19:35:38 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 4022028 00:34:15.360 19:35:38 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 4022028 ']' 00:34:15.360 19:35:38 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 4022028 00:34:15.360 19:35:38 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:34:15.360 19:35:38 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:15.360 19:35:38 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4022028 00:34:15.360 19:35:38 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:15.360 19:35:38 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:15.360 19:35:38 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4022028' 00:34:15.360 killing process with pid 4022028 00:34:15.360 19:35:38 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 4022028 00:34:15.360 19:35:38 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 4022028 00:34:17.889 19:35:40 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:17.889 19:35:40 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:17.889 19:35:40 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:17.889 19:35:40 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:34:17.889 19:35:40 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:17.889 19:35:40 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:34:17.889 19:35:40 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:34:17.889 19:35:40 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:17.889 19:35:40 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:17.889 19:35:40 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.889 19:35:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:17.889 19:35:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.796 19:35:42 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:19.796 00:34:19.796 real 0m23.599s 00:34:19.796 user 0m29.889s 00:34:19.796 sys 0m6.424s 00:34:19.796 19:35:42 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:19.796 19:35:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.796 ************************************ 00:34:19.796 END TEST nvmf_identify_passthru 00:34:19.796 ************************************ 00:34:19.796 19:35:42 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:19.796 19:35:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:19.796 19:35:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:19.796 19:35:42 -- common/autotest_common.sh@10 -- # set +x 00:34:19.796 ************************************ 00:34:19.796 START TEST nvmf_dif 00:34:19.796 ************************************ 00:34:19.796 19:35:42 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:19.796 * Looking for test 
storage... 00:34:19.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:19.796 19:35:42 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:19.796 19:35:42 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:34:19.796 19:35:42 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:19.796 19:35:42 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:19.796 19:35:42 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:19.796 19:35:42 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:19.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.796 --rc genhtml_branch_coverage=1 00:34:19.796 --rc genhtml_function_coverage=1 00:34:19.796 --rc genhtml_legend=1 00:34:19.796 --rc geninfo_all_blocks=1 00:34:19.796 --rc geninfo_unexecuted_blocks=1 00:34:19.796 00:34:19.796 ' 00:34:19.796 19:35:42 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:19.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.796 --rc genhtml_branch_coverage=1 00:34:19.796 --rc genhtml_function_coverage=1 00:34:19.796 --rc genhtml_legend=1 00:34:19.796 --rc geninfo_all_blocks=1 00:34:19.796 --rc geninfo_unexecuted_blocks=1 00:34:19.796 00:34:19.796 ' 00:34:19.796 19:35:42 nvmf_dif -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:19.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.796 --rc genhtml_branch_coverage=1 00:34:19.796 --rc genhtml_function_coverage=1 00:34:19.796 --rc genhtml_legend=1 00:34:19.796 --rc geninfo_all_blocks=1 00:34:19.796 --rc geninfo_unexecuted_blocks=1 00:34:19.796 00:34:19.796 ' 00:34:19.796 19:35:42 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:19.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.796 --rc genhtml_branch_coverage=1 00:34:19.796 --rc genhtml_function_coverage=1 00:34:19.796 --rc genhtml_legend=1 00:34:19.796 --rc geninfo_all_blocks=1 00:34:19.796 --rc geninfo_unexecuted_blocks=1 00:34:19.796 00:34:19.796 ' 00:34:19.796 19:35:42 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:19.796 19:35:42 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:19.796 19:35:42 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:19.796 19:35:42 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:19.796 19:35:42 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:19.796 19:35:42 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:19.796 19:35:42 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:19.796 19:35:42 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:19.796 19:35:42 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:19.796 19:35:42 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:19.796 19:35:42 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:19.796 19:35:42 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:19.796 19:35:42 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:19.796 19:35:42 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:19.796 19:35:42 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:19.796 19:35:42 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:19.796 19:35:42 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:19.796 19:35:42 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:19.796 19:35:42 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:19.796 19:35:42 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:19.796 19:35:42 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.797 19:35:42 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.797 19:35:42 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.797 19:35:42 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:19.797 19:35:42 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.797 19:35:42 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:19.797 19:35:42 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:19.797 19:35:42 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:19.797 19:35:42 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:19.797 19:35:42 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:19.797 19:35:42 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:19.797 19:35:42 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:19.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:19.797 19:35:42 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:19.797 19:35:42 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:19.797 19:35:42 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:19.797 19:35:42 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:19.797 19:35:42 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:19.797 19:35:42 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:19.797 19:35:42 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:19.797 19:35:42 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:19.797 19:35:42 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:19.797 19:35:42 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:19.797 19:35:42 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:19.797 19:35:42 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:19.797 19:35:42 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:19.797 19:35:42 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:19.797 19:35:42 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:19.797 19:35:42 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.797 19:35:42 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:19.797 19:35:42 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:19.797 19:35:42 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:34:19.797 19:35:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:26.371 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:26.371 
19:35:48 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:26.371 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:26.371 Found net devices under 0000:86:00.0: cvl_0_0 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:26.371 Found net devices under 0000:86:00.1: cvl_0_1 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:26.371 19:35:48 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:26.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:26.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:34:26.372 00:34:26.372 --- 10.0.0.2 ping statistics --- 00:34:26.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.372 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:26.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:26.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:34:26.372 00:34:26.372 --- 10.0.0.1 ping statistics --- 00:34:26.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.372 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:26.372 19:35:48 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:28.271 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:28.272 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:28.272 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:28.272 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:28.272 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:28.272 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:28.272 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:28.272 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:28.272 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:28.272 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:28.272 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:28.272 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:28.272 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:28.272 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:28.272 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:28.272 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:28.272 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:28.530 19:35:51 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:28.530 19:35:51 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:28.530 19:35:51 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:28.530 19:35:51 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:28.530 19:35:51 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:28.530 19:35:51 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:28.530 19:35:51 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:28.530 19:35:51 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:28.530 19:35:51 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:28.530 19:35:51 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:28.530 19:35:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:28.530 19:35:51 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=4029251 00:34:28.530 19:35:51 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:28.530 19:35:51 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 4029251 00:34:28.530 19:35:51 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 4029251 ']' 00:34:28.530 19:35:51 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.530 19:35:51 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:28.530 19:35:51 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:34:28.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.530 19:35:51 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:28.530 19:35:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:28.530 [2024-11-26 19:35:51.615337] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:34:28.530 [2024-11-26 19:35:51.615380] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:28.789 [2024-11-26 19:35:51.693454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:28.789 [2024-11-26 19:35:51.733855] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:28.789 [2024-11-26 19:35:51.733889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:28.789 [2024-11-26 19:35:51.733896] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:28.789 [2024-11-26 19:35:51.733902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:28.789 [2024-11-26 19:35:51.733907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:28.789 [2024-11-26 19:35:51.734464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:28.789 19:35:51 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:28.789 19:35:51 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:34:28.789 19:35:51 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:28.789 19:35:51 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:28.789 19:35:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:28.789 19:35:51 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:28.789 19:35:51 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:28.789 19:35:51 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:28.789 19:35:51 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.789 19:35:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:28.789 [2024-11-26 19:35:51.870665] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:28.789 19:35:51 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.789 19:35:51 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:28.789 19:35:51 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:28.789 19:35:51 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:28.789 19:35:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:29.049 ************************************ 00:34:29.049 START TEST fio_dif_1_default 00:34:29.049 ************************************ 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.049 bdev_null0 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.049 [2024-11-26 19:35:51.942977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:29.049 { 00:34:29.049 "params": { 00:34:29.049 "name": "Nvme$subsystem", 00:34:29.049 "trtype": "$TEST_TRANSPORT", 00:34:29.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.049 "adrfam": "ipv4", 00:34:29.049 "trsvcid": "$NVMF_PORT", 00:34:29.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.049 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:34:29.049 "hdgst": ${hdgst:-false}, 00:34:29.049 "ddgst": ${ddgst:-false} 00:34:29.049 }, 00:34:29.049 "method": "bdev_nvme_attach_controller" 00:34:29.049 } 00:34:29.049 EOF 00:34:29.049 )") 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:29.049 "params": { 00:34:29.049 "name": "Nvme0", 00:34:29.049 "trtype": "tcp", 00:34:29.049 "traddr": "10.0.0.2", 00:34:29.049 "adrfam": "ipv4", 00:34:29.049 "trsvcid": "4420", 00:34:29.049 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:29.049 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:29.049 "hdgst": false, 00:34:29.049 "ddgst": false 00:34:29.049 }, 00:34:29.049 "method": "bdev_nvme_attach_controller" 00:34:29.049 }' 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:29.049 19:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:29.049 19:35:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:29.049 19:35:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:29.049 19:35:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:29.049 19:35:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.308 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:29.308 fio-3.35 00:34:29.308 Starting 1 thread 00:34:41.522 00:34:41.522 filename0: (groupid=0, jobs=1): err= 0: pid=4029636: Tue Nov 26 19:36:02 2024 00:34:41.522 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10014msec) 00:34:41.522 slat (nsec): min=5975, max=33313, avg=6553.54, stdev=1652.64 00:34:41.522 clat (usec): min=40831, max=43712, avg=41360.60, stdev=504.23 00:34:41.522 lat (usec): min=40837, max=43745, avg=41367.16, stdev=504.36 00:34:41.522 clat percentiles (usec): 00:34:41.522 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:41.522 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:41.522 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:41.522 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:34:41.522 | 99.99th=[43779] 00:34:41.523 bw ( KiB/s): min= 384, max= 416, per=99.57%, avg=385.60, stdev= 7.16, samples=20 00:34:41.523 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:34:41.523 lat (msec) : 50=100.00% 00:34:41.523 cpu : usr=92.56%, sys=7.15%, ctx=8, majf=0, minf=0 00:34:41.523 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:41.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.523 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.523 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:41.523 00:34:41.523 Run status group 0 (all jobs): 
00:34:41.523 READ: bw=387KiB/s (396kB/s), 387KiB/s-387KiB/s (396kB/s-396kB/s), io=3872KiB (3965kB), run=10014-10014msec 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.523 00:34:41.523 real 0m11.114s 00:34:41.523 user 0m16.177s 00:34:41.523 sys 0m1.020s 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:41.523 ************************************ 00:34:41.523 END TEST fio_dif_1_default 00:34:41.523 ************************************ 00:34:41.523 19:36:03 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:41.523 19:36:03 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:41.523 19:36:03 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:41.523 19:36:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:41.523 ************************************ 00:34:41.523 START TEST fio_dif_1_multi_subsystems 00:34:41.523 ************************************ 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.523 bdev_null0 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.523 [2024-11-26 19:36:03.134043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.523 bdev_null1 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:41.523 { 00:34:41.523 "params": { 00:34:41.523 "name": "Nvme$subsystem", 00:34:41.523 "trtype": "$TEST_TRANSPORT", 00:34:41.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.523 "adrfam": "ipv4", 00:34:41.523 "trsvcid": "$NVMF_PORT", 00:34:41.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.523 "hdgst": ${hdgst:-false}, 00:34:41.523 "ddgst": ${ddgst:-false} 00:34:41.523 }, 00:34:41.523 "method": "bdev_nvme_attach_controller" 00:34:41.523 } 00:34:41.523 EOF 00:34:41.523 )") 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:41.523 { 00:34:41.523 "params": { 00:34:41.523 "name": "Nvme$subsystem", 00:34:41.523 "trtype": "$TEST_TRANSPORT", 00:34:41.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.523 "adrfam": "ipv4", 00:34:41.523 "trsvcid": "$NVMF_PORT", 00:34:41.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.523 "hdgst": ${hdgst:-false}, 00:34:41.523 "ddgst": ${ddgst:-false} 00:34:41.523 }, 00:34:41.523 "method": "bdev_nvme_attach_controller" 00:34:41.523 } 00:34:41.523 EOF 00:34:41.523 )") 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:41.523 "params": { 00:34:41.523 "name": "Nvme0", 00:34:41.523 "trtype": "tcp", 00:34:41.523 "traddr": "10.0.0.2", 00:34:41.523 "adrfam": "ipv4", 00:34:41.523 "trsvcid": "4420", 00:34:41.523 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:41.523 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:41.523 "hdgst": false, 00:34:41.523 "ddgst": false 00:34:41.523 }, 00:34:41.523 "method": "bdev_nvme_attach_controller" 00:34:41.523 },{ 00:34:41.523 "params": { 00:34:41.523 "name": "Nvme1", 00:34:41.523 "trtype": "tcp", 00:34:41.523 "traddr": "10.0.0.2", 00:34:41.523 "adrfam": "ipv4", 00:34:41.523 "trsvcid": "4420", 00:34:41.523 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:41.523 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:41.523 "hdgst": false, 00:34:41.523 "ddgst": false 00:34:41.523 }, 00:34:41.523 "method": "bdev_nvme_attach_controller" 00:34:41.523 }' 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:41.523 19:36:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.523 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:41.523 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:41.523 fio-3.35 00:34:41.523 Starting 2 threads 00:34:51.500 00:34:51.500 filename0: (groupid=0, jobs=1): err= 0: pid=4032427: Tue Nov 26 19:36:14 2024 00:34:51.500 read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10041msec) 00:34:51.500 slat (nsec): min=5861, max=65941, avg=8256.88, stdev=4053.33 00:34:51.500 clat (usec): min=40738, max=42297, avg=41463.76, stdev=499.35 00:34:51.500 lat (usec): min=40761, max=42319, avg=41472.02, stdev=499.16 00:34:51.500 clat percentiles (usec): 00:34:51.500 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:51.500 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:34:51.500 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:51.500 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:51.500 | 99.99th=[42206] 00:34:51.500 bw ( KiB/s): min= 352, max= 416, per=33.23%, avg=385.60, stdev=12.61, samples=20 00:34:51.500 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:34:51.500 lat (msec) : 50=100.00% 00:34:51.500 cpu : usr=96.45%, sys=3.18%, ctx=47, majf=0, minf=166 00:34:51.500 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:51.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.500 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.500 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:51.500 filename1: (groupid=0, jobs=1): err= 0: pid=4032428: Tue Nov 26 19:36:14 2024 00:34:51.500 read: IOPS=193, BW=773KiB/s (792kB/s)(7760KiB/10034msec) 00:34:51.500 slat (nsec): min=5850, max=55216, avg=7660.61, stdev=3395.37 00:34:51.500 clat (usec): min=361, max=42531, avg=20664.65, stdev=20456.91 00:34:51.500 lat (usec): min=368, max=42539, avg=20672.31, stdev=20456.03 00:34:51.500 clat percentiles (usec): 00:34:51.500 | 1.00th=[ 383], 5.00th=[ 396], 10.00th=[ 400], 20.00th=[ 412], 00:34:51.500 | 30.00th=[ 420], 40.00th=[ 437], 50.00th=[ 545], 60.00th=[40633], 00:34:51.500 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:34:51.500 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:51.500 | 99.99th=[42730] 00:34:51.500 bw ( KiB/s): min= 704, max= 896, per=66.81%, avg=774.40, stdev=45.96, samples=20 00:34:51.500 iops : min= 176, max= 224, avg=193.60, stdev=11.49, samples=20 00:34:51.500 lat (usec) : 500=49.64%, 750=0.46%, 1000=0.21% 00:34:51.500 lat (msec) : 2=0.21%, 50=49.48% 00:34:51.500 cpu : usr=96.76%, sys=2.92%, ctx=25, majf=0, minf=145 00:34:51.500 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:51.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:34:51.500 issued rwts: total=1940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.500 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:51.500 00:34:51.500 Run status group 0 (all jobs): 00:34:51.500 READ: bw=1158KiB/s (1186kB/s), 386KiB/s-773KiB/s (395kB/s-792kB/s), io=11.4MiB (11.9MB), run=10034-10041msec 00:34:51.500 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:51.500 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:51.500 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:51.500 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:51.500 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:51.500 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:51.500 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.500 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.500 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.500 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:51.500 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.500 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.500 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.500 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:51.500 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:51.501 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:51.501 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:51.501 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.501 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.501 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.501 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:51.501 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.501 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.501 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.501 00:34:51.501 real 0m11.371s 00:34:51.501 user 0m26.203s 00:34:51.501 sys 0m1.001s 00:34:51.501 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:51.501 19:36:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.501 ************************************ 00:34:51.501 END TEST fio_dif_1_multi_subsystems 00:34:51.501 ************************************ 00:34:51.501 19:36:14 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:51.501 19:36:14 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:51.501 19:36:14 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:51.501 19:36:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:51.501 ************************************ 00:34:51.501 START TEST fio_dif_rand_params 00:34:51.501 ************************************ 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.501 bdev_null0 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.501 [2024-11-26 19:36:14.583450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:51.501 { 00:34:51.501 "params": { 00:34:51.501 "name": "Nvme$subsystem", 00:34:51.501 "trtype": "$TEST_TRANSPORT", 00:34:51.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:51.501 "adrfam": "ipv4", 00:34:51.501 "trsvcid": "$NVMF_PORT", 00:34:51.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:51.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:51.501 "hdgst": ${hdgst:-false}, 00:34:51.501 "ddgst": ${ddgst:-false} 00:34:51.501 }, 00:34:51.501 "method": "bdev_nvme_attach_controller" 00:34:51.501 } 00:34:51.501 EOF 00:34:51.501 )") 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
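[editor's note] The create_subsystem 0 helper traced just above effectively issues four RPCs against the running target (rpc_cmd forwards to scripts/rpc.py): create a 64 MB null bdev with 512-byte blocks, 16 bytes of per-block metadata and DIF type 3, create an NVMe-oF subsystem, add the bdev as its namespace, and expose it on a TCP listener. A minimal standalone sketch, assuming the usual scripts/rpc.py path inside the spdk checkout and that the TCP transport was already created earlier in the run (nvmf_create_transport -t tcp):

    # Sketch only -- same arguments as the rpc_cmd calls traced above.
    RPC=./scripts/rpc.py   # assumed path; the suite's rpc_cmd wrapper resolves this itself
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420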
00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:51.501 19:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:51.501 "params": { 00:34:51.501 "name": "Nvme0", 00:34:51.501 "trtype": "tcp", 00:34:51.501 "traddr": "10.0.0.2", 00:34:51.501 "adrfam": "ipv4", 00:34:51.501 "trsvcid": "4420", 00:34:51.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:51.501 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:51.501 "hdgst": false, 00:34:51.501 "ddgst": false 00:34:51.501 }, 00:34:51.501 "method": "bdev_nvme_attach_controller" 00:34:51.501 }' 00:34:51.788 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:51.788 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:51.788 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:51.788 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.788 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:51.788 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:51.788 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:51.788 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:51.788 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:51.788 19:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:52.051 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:52.051 ... 
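[editor's note] Before the fio banner below, the test wires fio to the exported namespace through the SPDK bdev ioengine: the bdev_nvme_attach_controller JSON printed above is fed to fio on one file descriptor and a generated job file on another. A rough standalone equivalent is sketched below; it assumes a bdev.json holding a bdev-subsystem config with entries like the block printed above, reconstructs the job file from the script variables (bs=128k, numjobs=3, iodepth=3, runtime=5) and the fio banner, and assumes the attached bdev is named Nvme0n1 (controller Nvme0, namespace 1) -- the real job file is written to /dev/fd/61 and never echoed in the log.

    # Sketch only -- not the exact job file gen_fio_conf produces.
    cat > dif.job <<'EOF'
    [global]
    ioengine=spdk_bdev
    thread=1          # the spdk_bdev engine requires fio's thread mode
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    time_based=1
    runtime=5

    [filename0]
    filename=Nvme0n1  # assumed bdev name created by the attach-controller JSON
    EOF
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif.job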
00:34:52.051 fio-3.35 00:34:52.051 Starting 3 threads 00:34:58.624 00:34:58.624 filename0: (groupid=0, jobs=1): err= 0: pid=4036228: Tue Nov 26 19:36:20 2024 00:34:58.624 read: IOPS=310, BW=38.9MiB/s (40.7MB/s)(196MiB/5047msec) 00:34:58.624 slat (nsec): min=6155, max=28134, avg=10963.76, stdev=2143.86 00:34:58.624 clat (usec): min=4400, max=87166, avg=9609.96, stdev=5872.44 00:34:58.624 lat (usec): min=4410, max=87179, avg=9620.92, stdev=5872.40 00:34:58.624 clat percentiles (usec): 00:34:58.625 | 1.00th=[ 5800], 5.00th=[ 6915], 10.00th=[ 7504], 20.00th=[ 7963], 00:34:58.625 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9110], 00:34:58.625 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10814], 00:34:58.625 | 99.00th=[48497], 99.50th=[49546], 99.90th=[51119], 99.95th=[87557], 00:34:58.625 | 99.99th=[87557] 00:34:58.625 bw ( KiB/s): min=34048, max=45056, per=34.07%, avg=40115.20, stdev=3646.54, samples=10 00:34:58.625 iops : min= 266, max= 352, avg=313.40, stdev=28.49, samples=10 00:34:58.625 lat (msec) : 10=84.51%, 20=13.51%, 50=1.72%, 100=0.25% 00:34:58.625 cpu : usr=94.29%, sys=5.41%, ctx=12, majf=0, minf=58 00:34:58.625 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.625 issued rwts: total=1569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.625 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:58.625 filename0: (groupid=0, jobs=1): err= 0: pid=4036229: Tue Nov 26 19:36:20 2024 00:34:58.625 read: IOPS=306, BW=38.3MiB/s (40.2MB/s)(192MiB/5004msec) 00:34:58.625 slat (nsec): min=6140, max=25690, avg=11345.66, stdev=1990.81 00:34:58.625 clat (usec): min=3207, max=48175, avg=9764.31, stdev=3048.10 00:34:58.625 lat (usec): min=3213, max=48183, avg=9775.65, stdev=3048.62 00:34:58.625 clat percentiles (usec): 00:34:58.625 | 1.00th=[ 3654], 5.00th=[ 4686], 10.00th=[ 6587], 20.00th=[ 8455], 00:34:58.625 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10421], 00:34:58.625 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11731], 95.00th=[12256], 00:34:58.625 | 99.00th=[13042], 99.50th=[13566], 99.90th=[46924], 99.95th=[47973], 00:34:58.625 | 99.99th=[47973] 00:34:58.625 bw ( KiB/s): min=33536, max=53248, per=33.61%, avg=39566.22, stdev=5743.07, samples=9 00:34:58.625 iops : min= 262, max= 416, avg=309.11, stdev=44.87, samples=9 00:34:58.625 lat (msec) : 4=2.80%, 10=46.97%, 20=49.84%, 50=0.39% 00:34:58.625 cpu : usr=94.34%, sys=5.34%, ctx=8, majf=0, minf=24 00:34:58.625 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.625 issued rwts: total=1535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.625 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:58.625 filename0: (groupid=0, jobs=1): err= 0: pid=4036230: Tue Nov 26 19:36:20 2024 00:34:58.625 read: IOPS=304, BW=38.1MiB/s (40.0MB/s)(192MiB/5043msec) 00:34:58.625 slat (nsec): min=6093, max=47172, avg=11009.67, stdev=2166.35 00:34:58.625 clat (usec): min=3628, max=51531, avg=9796.62, stdev=5290.29 00:34:58.625 lat (usec): min=3635, max=51538, avg=9807.63, stdev=5290.32 00:34:58.625 clat percentiles (usec): 00:34:58.625 | 1.00th=[ 4293], 5.00th=[ 6325], 10.00th=[ 7504], 20.00th=[ 8160], 
00:34:58.625 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9634], 00:34:58.625 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10945], 95.00th=[11600], 00:34:58.625 | 99.00th=[48497], 99.50th=[49021], 99.90th=[50594], 99.95th=[51643], 00:34:58.625 | 99.99th=[51643] 00:34:58.625 bw ( KiB/s): min=29184, max=42496, per=33.40%, avg=39321.60, stdev=3897.78, samples=10 00:34:58.625 iops : min= 228, max= 332, avg=307.20, stdev=30.45, samples=10 00:34:58.625 lat (msec) : 4=0.59%, 10=71.13%, 20=26.59%, 50=1.43%, 100=0.26% 00:34:58.625 cpu : usr=94.17%, sys=5.53%, ctx=12, majf=0, minf=90 00:34:58.625 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.625 issued rwts: total=1538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.625 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:58.625 00:34:58.625 Run status group 0 (all jobs): 00:34:58.625 READ: bw=115MiB/s (121MB/s), 38.1MiB/s-38.9MiB/s (40.0MB/s-40.7MB/s), io=580MiB (608MB), run=5004-5047msec 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.625 bdev_null0 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.625 [2024-11-26 19:36:20.871734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.625 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.626 bdev_null1 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.626 bdev_null2 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:58.626 { 00:34:58.626 "params": { 00:34:58.626 "name": "Nvme$subsystem", 00:34:58.626 "trtype": "$TEST_TRANSPORT", 00:34:58.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.626 "adrfam": "ipv4", 00:34:58.626 "trsvcid": "$NVMF_PORT", 00:34:58.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.626 "hdgst": ${hdgst:-false}, 00:34:58.626 "ddgst": ${ddgst:-false} 00:34:58.626 }, 00:34:58.626 "method": "bdev_nvme_attach_controller" 00:34:58.626 } 00:34:58.626 EOF 00:34:58.626 )") 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:58.626 { 00:34:58.626 "params": { 00:34:58.626 "name": "Nvme$subsystem", 00:34:58.626 "trtype": "$TEST_TRANSPORT", 00:34:58.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.626 "adrfam": "ipv4", 00:34:58.626 "trsvcid": "$NVMF_PORT", 00:34:58.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.626 "hdgst": ${hdgst:-false}, 00:34:58.626 "ddgst": ${ddgst:-false} 00:34:58.626 }, 00:34:58.626 "method": "bdev_nvme_attach_controller" 00:34:58.626 } 00:34:58.626 EOF 00:34:58.626 )") 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ 
)) 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:58.626 { 00:34:58.626 "params": { 00:34:58.626 "name": "Nvme$subsystem", 00:34:58.626 "trtype": "$TEST_TRANSPORT", 00:34:58.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.626 "adrfam": "ipv4", 00:34:58.626 "trsvcid": "$NVMF_PORT", 00:34:58.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.626 "hdgst": ${hdgst:-false}, 00:34:58.626 "ddgst": ${ddgst:-false} 00:34:58.626 }, 00:34:58.626 "method": "bdev_nvme_attach_controller" 00:34:58.626 } 00:34:58.626 EOF 00:34:58.626 )") 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:58.626 19:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:58.627 19:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:58.627 19:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:58.627 "params": { 00:34:58.627 "name": "Nvme0", 00:34:58.627 "trtype": "tcp", 00:34:58.627 "traddr": "10.0.0.2", 00:34:58.627 "adrfam": "ipv4", 00:34:58.627 "trsvcid": "4420", 00:34:58.627 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:58.627 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:58.627 "hdgst": false, 00:34:58.627 "ddgst": false 00:34:58.627 }, 00:34:58.627 "method": "bdev_nvme_attach_controller" 00:34:58.627 },{ 00:34:58.627 "params": { 00:34:58.627 "name": "Nvme1", 00:34:58.627 "trtype": "tcp", 00:34:58.627 "traddr": "10.0.0.2", 00:34:58.627 "adrfam": "ipv4", 00:34:58.627 "trsvcid": "4420", 00:34:58.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:58.627 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:58.627 "hdgst": false, 00:34:58.627 "ddgst": false 00:34:58.627 }, 00:34:58.627 "method": "bdev_nvme_attach_controller" 00:34:58.627 },{ 00:34:58.627 "params": { 00:34:58.627 "name": "Nvme2", 00:34:58.627 "trtype": "tcp", 00:34:58.627 "traddr": "10.0.0.2", 00:34:58.627 "adrfam": "ipv4", 00:34:58.627 "trsvcid": "4420", 00:34:58.627 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:58.627 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:58.627 "hdgst": false, 00:34:58.627 "ddgst": false 00:34:58.627 }, 00:34:58.627 "method": "bdev_nvme_attach_controller" 00:34:58.627 }' 00:34:58.627 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:58.627 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:58.627 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.627 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.627 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:58.627 19:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:58.627 19:36:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:34:58.627 19:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:58.627 19:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:58.627 19:36:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.627 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:58.627 ... 00:34:58.627 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:58.627 ... 00:34:58.627 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:58.627 ... 00:34:58.627 fio-3.35 00:34:58.627 Starting 24 threads 00:35:10.824 00:35:10.824 filename0: (groupid=0, jobs=1): err= 0: pid=4037698: Tue Nov 26 19:36:32 2024 00:35:10.824 read: IOPS=523, BW=2092KiB/s (2143kB/s)(20.4MiB/10002msec) 00:35:10.824 slat (nsec): min=8077, max=90722, avg=23942.99, stdev=15576.11 00:35:10.824 clat (usec): min=22836, max=39375, avg=30393.55, stdev=715.45 00:35:10.824 lat (usec): min=22862, max=39403, avg=30417.50, stdev=711.84 00:35:10.824 clat percentiles (usec): 00:35:10.824 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:35:10.824 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:35:10.824 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:35:10.824 | 99.00th=[31327], 99.50th=[33162], 99.90th=[36439], 99.95th=[36439], 00:35:10.824 | 99.99th=[39584] 00:35:10.824 bw ( KiB/s): min= 2048, max= 2176, per=4.12%, avg=2088.16, stdev=60.74, samples=19 00:35:10.824 iops : min= 512, max= 544, avg=522.00, stdev=15.13, samples=19 00:35:10.824 lat (msec) : 50=100.00% 00:35:10.824 cpu : usr=98.41%, sys=1.19%, ctx=53, majf=0, minf=9 00:35:10.824 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.824 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.824 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.824 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.824 filename0: (groupid=0, jobs=1): err= 0: pid=4037699: Tue Nov 26 19:36:32 2024 00:35:10.824 read: IOPS=522, BW=2092KiB/s (2142kB/s)(20.4MiB/10004msec) 00:35:10.824 slat (usec): min=7, max=103, avg=46.86, stdev=19.49 00:35:10.824 clat (usec): min=15844, max=53128, avg=30166.23, stdev=1618.93 00:35:10.824 lat (usec): min=15858, max=53178, avg=30213.09, stdev=1618.92 00:35:10.824 clat percentiles (usec): 00:35:10.824 | 1.00th=[28967], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:35:10.824 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:35:10.824 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:10.824 | 99.00th=[31327], 99.50th=[31851], 99.90th=[53216], 99.95th=[53216], 00:35:10.824 | 99.99th=[53216] 00:35:10.824 bw ( KiB/s): min= 1920, max= 2176, per=4.12%, avg=2088.16, stdev=74.71, samples=19 00:35:10.824 iops : min= 480, max= 544, avg=522.00, stdev=18.70, samples=19 00:35:10.824 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:35:10.824 cpu : usr=98.56%, sys=1.05%, ctx=14, majf=0, minf=9 00:35:10.824 IO depths 
: 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.825 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.825 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.825 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.825 filename0: (groupid=0, jobs=1): err= 0: pid=4037700: Tue Nov 26 19:36:32 2024 00:35:10.825 read: IOPS=523, BW=2096KiB/s (2146kB/s)(20.5MiB/10016msec) 00:35:10.825 slat (nsec): min=5814, max=88334, avg=42587.42, stdev=19210.59 00:35:10.825 clat (usec): min=15830, max=34468, avg=30179.98, stdev=1040.98 00:35:10.825 lat (usec): min=15845, max=34484, avg=30222.57, stdev=1040.03 00:35:10.825 clat percentiles (usec): 00:35:10.825 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:35:10.825 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:10.825 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:10.825 | 99.00th=[31327], 99.50th=[31851], 99.90th=[34341], 99.95th=[34341], 00:35:10.825 | 99.99th=[34341] 00:35:10.825 bw ( KiB/s): min= 2048, max= 2176, per=4.12%, avg=2088.16, stdev=60.74, samples=19 00:35:10.825 iops : min= 512, max= 544, avg=522.00, stdev=15.13, samples=19 00:35:10.825 lat (msec) : 20=0.30%, 50=99.70% 00:35:10.825 cpu : usr=98.66%, sys=0.95%, ctx=15, majf=0, minf=9 00:35:10.825 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.825 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.825 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.825 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.825 filename0: (groupid=0, jobs=1): err= 0: pid=4037701: Tue Nov 26 19:36:32 2024 00:35:10.825 read: IOPS=534, BW=2139KiB/s (2190kB/s)(20.9MiB/10023msec) 00:35:10.825 slat (nsec): min=6534, max=61807, avg=11786.75, stdev=5230.37 00:35:10.825 clat (usec): min=2518, max=41894, avg=29815.33, stdev=4026.01 00:35:10.825 lat (usec): min=2528, max=41921, avg=29827.11, stdev=4025.54 00:35:10.825 clat percentiles (usec): 00:35:10.825 | 1.00th=[ 3425], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:35:10.825 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:35:10.825 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:35:10.825 | 99.00th=[31327], 99.50th=[31589], 99.90th=[32637], 99.95th=[32637], 00:35:10.825 | 99.99th=[41681] 00:35:10.825 bw ( KiB/s): min= 2048, max= 2944, per=4.22%, avg=2137.80, stdev=199.50, samples=20 00:35:10.825 iops : min= 512, max= 736, avg=534.45, stdev=49.88, samples=20 00:35:10.825 lat (msec) : 4=1.49%, 10=0.34%, 20=0.93%, 50=97.24% 00:35:10.825 cpu : usr=98.16%, sys=1.44%, ctx=16, majf=0, minf=9 00:35:10.825 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:10.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.825 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.825 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.825 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.825 filename0: (groupid=0, jobs=1): err= 0: pid=4037702: Tue Nov 26 19:36:32 2024 00:35:10.825 read: IOPS=526, BW=2105KiB/s (2155kB/s)(20.6MiB/10005msec) 00:35:10.825 slat (nsec): min=7680, 
max=87294, avg=37811.96, stdev=19245.22 00:35:10.825 clat (usec): min=11563, max=32403, avg=30135.78, stdev=1744.48 00:35:10.825 lat (usec): min=11585, max=32435, avg=30173.59, stdev=1744.84 00:35:10.825 clat percentiles (usec): 00:35:10.825 | 1.00th=[20579], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:35:10.825 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:10.825 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:10.825 | 99.00th=[31327], 99.50th=[31327], 99.90th=[32375], 99.95th=[32375], 00:35:10.825 | 99.99th=[32375] 00:35:10.825 bw ( KiB/s): min= 2043, max= 2304, per=4.15%, avg=2101.63, stdev=77.89, samples=19 00:35:10.825 iops : min= 510, max= 576, avg=525.37, stdev=19.51, samples=19 00:35:10.825 lat (msec) : 20=0.91%, 50=99.09% 00:35:10.825 cpu : usr=98.31%, sys=1.31%, ctx=13, majf=0, minf=9 00:35:10.825 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.825 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.825 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.825 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.825 filename0: (groupid=0, jobs=1): err= 0: pid=4037703: Tue Nov 26 19:36:32 2024 00:35:10.825 read: IOPS=523, BW=2096KiB/s (2146kB/s)(20.5MiB/10017msec) 00:35:10.825 slat (nsec): min=6441, max=51626, avg=26155.76, stdev=7929.89 00:35:10.825 clat (usec): min=17533, max=33305, avg=30309.59, stdev=905.15 00:35:10.825 lat (usec): min=17563, max=33324, avg=30335.74, stdev=905.02 00:35:10.825 clat percentiles (usec): 00:35:10.825 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:10.825 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:10.825 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[31065], 00:35:10.825 | 99.00th=[31327], 99.50th=[33162], 99.90th=[33162], 99.95th=[33162], 00:35:10.825 | 99.99th=[33424] 00:35:10.825 bw ( KiB/s): min= 2048, max= 2176, per=4.12%, avg=2088.42, stdev=61.13, samples=19 00:35:10.825 iops : min= 512, max= 544, avg=522.11, stdev=15.28, samples=19 00:35:10.825 lat (msec) : 20=0.30%, 50=99.70% 00:35:10.825 cpu : usr=98.51%, sys=1.10%, ctx=16, majf=0, minf=9 00:35:10.825 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.825 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.825 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.825 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.825 filename0: (groupid=0, jobs=1): err= 0: pid=4037704: Tue Nov 26 19:36:32 2024 00:35:10.825 read: IOPS=526, BW=2105KiB/s (2155kB/s)(20.6MiB/10005msec) 00:35:10.825 slat (nsec): min=7564, max=74290, avg=19860.34, stdev=7972.26 00:35:10.825 clat (usec): min=11567, max=32564, avg=30249.62, stdev=1755.59 00:35:10.825 lat (usec): min=11593, max=32577, avg=30269.48, stdev=1755.00 00:35:10.825 clat percentiles (usec): 00:35:10.825 | 1.00th=[20579], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:10.825 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:10.825 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:35:10.825 | 99.00th=[31327], 99.50th=[31327], 99.90th=[32375], 99.95th=[32637], 00:35:10.825 | 99.99th=[32637] 00:35:10.825 bw 
( KiB/s): min= 2043, max= 2304, per=4.15%, avg=2101.63, stdev=77.89, samples=19 00:35:10.825 iops : min= 510, max= 576, avg=525.37, stdev=19.51, samples=19 00:35:10.825 lat (msec) : 20=0.91%, 50=99.09% 00:35:10.825 cpu : usr=98.64%, sys=0.96%, ctx=12, majf=0, minf=9 00:35:10.825 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.825 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.825 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.825 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.825 filename0: (groupid=0, jobs=1): err= 0: pid=4037705: Tue Nov 26 19:36:32 2024 00:35:10.825 read: IOPS=526, BW=2105KiB/s (2155kB/s)(20.6MiB/10005msec) 00:35:10.825 slat (nsec): min=8749, max=83353, avg=24382.94, stdev=10809.83 00:35:10.825 clat (usec): min=11525, max=32581, avg=30217.43, stdev=1751.76 00:35:10.825 lat (usec): min=11547, max=32594, avg=30241.81, stdev=1751.51 00:35:10.825 clat percentiles (usec): 00:35:10.825 | 1.00th=[20579], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:10.825 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:10.825 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:35:10.825 | 99.00th=[31327], 99.50th=[31327], 99.90th=[32375], 99.95th=[32637], 00:35:10.825 | 99.99th=[32637] 00:35:10.825 bw ( KiB/s): min= 2043, max= 2304, per=4.15%, avg=2101.63, stdev=77.89, samples=19 00:35:10.825 iops : min= 510, max= 576, avg=525.37, stdev=19.51, samples=19 00:35:10.825 lat (msec) : 20=0.91%, 50=99.09% 00:35:10.825 cpu : usr=98.39%, sys=1.21%, ctx=14, majf=0, minf=9 00:35:10.825 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.825 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.825 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.825 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.825 filename1: (groupid=0, jobs=1): err= 0: pid=4037706: Tue Nov 26 19:36:32 2024 00:35:10.825 read: IOPS=523, BW=2096KiB/s (2146kB/s)(20.5MiB/10016msec) 00:35:10.825 slat (nsec): min=7423, max=76168, avg=27223.98, stdev=16588.69 00:35:10.825 clat (usec): min=16230, max=41312, avg=30269.46, stdev=866.29 00:35:10.825 lat (usec): min=16239, max=41330, avg=30296.68, stdev=868.10 00:35:10.825 clat percentiles (usec): 00:35:10.825 | 1.00th=[29230], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:35:10.825 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:10.825 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:10.825 | 99.00th=[31327], 99.50th=[31589], 99.90th=[32900], 99.95th=[33162], 00:35:10.825 | 99.99th=[41157] 00:35:10.825 bw ( KiB/s): min= 2043, max= 2176, per=4.13%, avg=2092.55, stdev=62.84, samples=20 00:35:10.825 iops : min= 510, max= 544, avg=523.10, stdev=15.74, samples=20 00:35:10.825 lat (msec) : 20=0.34%, 50=99.66% 00:35:10.825 cpu : usr=98.78%, sys=0.84%, ctx=9, majf=0, minf=9 00:35:10.825 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.825 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.825 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:35:10.825 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.825 filename1: (groupid=0, jobs=1): err= 0: pid=4037707: Tue Nov 26 19:36:32 2024 00:35:10.825 read: IOPS=523, BW=2096KiB/s (2146kB/s)(20.5MiB/10017msec) 00:35:10.825 slat (nsec): min=5479, max=55603, avg=26615.21, stdev=8221.67 00:35:10.825 clat (usec): min=17501, max=41982, avg=30302.92, stdev=950.41 00:35:10.825 lat (usec): min=17523, max=41997, avg=30329.53, stdev=950.32 00:35:10.825 clat percentiles (usec): 00:35:10.825 | 1.00th=[29492], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:35:10.825 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:10.825 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[31065], 00:35:10.825 | 99.00th=[31327], 99.50th=[32637], 99.90th=[33817], 99.95th=[34341], 00:35:10.825 | 99.99th=[42206] 00:35:10.826 bw ( KiB/s): min= 2048, max= 2176, per=4.12%, avg=2088.63, stdev=60.99, samples=19 00:35:10.826 iops : min= 512, max= 544, avg=522.16, stdev=15.25, samples=19 00:35:10.826 lat (msec) : 20=0.30%, 50=99.70% 00:35:10.826 cpu : usr=98.49%, sys=1.10%, ctx=19, majf=0, minf=9 00:35:10.826 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:10.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.826 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.826 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.826 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.826 filename1: (groupid=0, jobs=1): err= 0: pid=4037708: Tue Nov 26 19:36:32 2024 00:35:10.826 read: IOPS=522, BW=2092KiB/s (2142kB/s)(20.4MiB/10005msec) 00:35:10.826 slat (nsec): min=4191, max=52711, avg=25851.46, stdev=8258.70 00:35:10.826 clat (usec): min=17558, max=51200, avg=30356.46, stdev=1456.34 00:35:10.826 lat (usec): min=17573, max=51212, avg=30382.32, stdev=1455.68 00:35:10.826 clat percentiles (usec): 00:35:10.826 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:10.826 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:10.826 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[31065], 00:35:10.826 | 99.00th=[31327], 99.50th=[32900], 99.90th=[51119], 99.95th=[51119], 00:35:10.826 | 99.99th=[51119] 00:35:10.826 bw ( KiB/s): min= 1920, max= 2176, per=4.12%, avg=2088.63, stdev=74.43, samples=19 00:35:10.826 iops : min= 480, max= 544, avg=522.16, stdev=18.61, samples=19 00:35:10.826 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:35:10.826 cpu : usr=98.65%, sys=0.96%, ctx=10, majf=0, minf=9 00:35:10.826 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.826 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.826 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.826 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.826 filename1: (groupid=0, jobs=1): err= 0: pid=4037709: Tue Nov 26 19:36:32 2024 00:35:10.826 read: IOPS=526, BW=2105KiB/s (2155kB/s)(20.6MiB/10005msec) 00:35:10.826 slat (nsec): min=7555, max=89657, avg=31148.91, stdev=16223.62 00:35:10.826 clat (usec): min=10932, max=42106, avg=30182.54, stdev=1746.08 00:35:10.826 lat (usec): min=10951, max=42121, avg=30213.68, stdev=1746.37 00:35:10.826 clat percentiles (usec): 00:35:10.826 | 1.00th=[20579], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:35:10.826 | 
30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:10.826 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[31065], 00:35:10.826 | 99.00th=[31327], 99.50th=[31327], 99.90th=[32375], 99.95th=[32375], 00:35:10.826 | 99.99th=[42206] 00:35:10.826 bw ( KiB/s): min= 2043, max= 2304, per=4.15%, avg=2101.63, stdev=77.89, samples=19 00:35:10.826 iops : min= 510, max= 576, avg=525.37, stdev=19.51, samples=19 00:35:10.826 lat (msec) : 20=0.95%, 50=99.05% 00:35:10.826 cpu : usr=98.57%, sys=1.04%, ctx=14, majf=0, minf=9 00:35:10.826 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:10.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.826 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.826 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.826 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.826 filename1: (groupid=0, jobs=1): err= 0: pid=4037710: Tue Nov 26 19:36:32 2024 00:35:10.826 read: IOPS=526, BW=2105KiB/s (2155kB/s)(20.6MiB/10005msec) 00:35:10.826 slat (nsec): min=7624, max=86403, avg=21796.19, stdev=8738.23 00:35:10.826 clat (usec): min=11441, max=32624, avg=30234.09, stdev=1757.17 00:35:10.826 lat (usec): min=11464, max=32638, avg=30255.89, stdev=1756.95 00:35:10.826 clat percentiles (usec): 00:35:10.826 | 1.00th=[20579], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:10.826 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:35:10.826 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:35:10.826 | 99.00th=[31327], 99.50th=[31327], 99.90th=[32637], 99.95th=[32637], 00:35:10.826 | 99.99th=[32637] 00:35:10.826 bw ( KiB/s): min= 2043, max= 2304, per=4.15%, avg=2101.63, stdev=77.89, samples=19 00:35:10.826 iops : min= 510, max= 576, avg=525.37, stdev=19.51, samples=19 00:35:10.826 lat (msec) : 20=0.91%, 50=99.09% 00:35:10.826 cpu : usr=98.45%, sys=1.16%, ctx=10, majf=0, minf=9 00:35:10.826 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.826 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.826 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.826 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.826 filename1: (groupid=0, jobs=1): err= 0: pid=4037711: Tue Nov 26 19:36:32 2024 00:35:10.826 read: IOPS=524, BW=2098KiB/s (2148kB/s)(20.5MiB/10017msec) 00:35:10.826 slat (nsec): min=5259, max=50757, avg=22073.89, stdev=8744.98 00:35:10.826 clat (usec): min=10488, max=35323, avg=30330.44, stdev=1393.70 00:35:10.826 lat (usec): min=10497, max=35362, avg=30352.52, stdev=1393.90 00:35:10.826 clat percentiles (usec): 00:35:10.826 | 1.00th=[25560], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:35:10.826 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:35:10.826 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:35:10.826 | 99.00th=[34866], 99.50th=[34866], 99.90th=[34866], 99.95th=[35390], 00:35:10.826 | 99.99th=[35390] 00:35:10.826 bw ( KiB/s): min= 2048, max= 2224, per=4.13%, avg=2090.95, stdev=65.77, samples=19 00:35:10.826 iops : min= 512, max= 556, avg=522.74, stdev=16.44, samples=19 00:35:10.826 lat (msec) : 20=0.42%, 50=99.58% 00:35:10.826 cpu : usr=98.37%, sys=1.24%, ctx=12, majf=0, minf=9 00:35:10.826 IO depths : 1=5.3%, 2=11.3%, 4=24.3%, 
8=51.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:35:10.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.826 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.826 issued rwts: total=5254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.826 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.826 filename1: (groupid=0, jobs=1): err= 0: pid=4037712: Tue Nov 26 19:36:32 2024 00:35:10.826 read: IOPS=527, BW=2109KiB/s (2159kB/s)(20.6MiB/10005msec) 00:35:10.826 slat (nsec): min=7377, max=91406, avg=38718.83, stdev=21747.16 00:35:10.826 clat (usec): min=15850, max=75722, avg=29991.64, stdev=2770.63 00:35:10.826 lat (usec): min=15876, max=75747, avg=30030.36, stdev=2772.22 00:35:10.826 clat percentiles (usec): 00:35:10.826 | 1.00th=[18744], 5.00th=[28443], 10.00th=[29492], 20.00th=[29754], 00:35:10.826 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:10.826 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:10.826 | 99.00th=[37487], 99.50th=[39060], 99.90th=[53740], 99.95th=[53740], 00:35:10.826 | 99.99th=[76022] 00:35:10.826 bw ( KiB/s): min= 2048, max= 2208, per=4.16%, avg=2106.53, stdev=63.49, samples=19 00:35:10.826 iops : min= 512, max= 552, avg=526.63, stdev=15.87, samples=19 00:35:10.826 lat (msec) : 20=1.48%, 50=98.22%, 100=0.30% 00:35:10.826 cpu : usr=98.43%, sys=1.18%, ctx=6, majf=0, minf=9 00:35:10.826 IO depths : 1=5.1%, 2=10.3%, 4=21.2%, 8=55.4%, 16=8.0%, 32=0.0%, >=64=0.0% 00:35:10.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.826 complete : 0=0.0%, 4=93.2%, 8=1.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.826 issued rwts: total=5274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.826 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.826 filename1: (groupid=0, jobs=1): err= 0: pid=4037713: Tue Nov 26 19:36:32 2024 00:35:10.826 read: IOPS=537, BW=2149KiB/s (2200kB/s)(21.0MiB/10007msec) 00:35:10.826 slat (nsec): min=7419, max=65139, avg=23586.26, stdev=13819.45 00:35:10.826 clat (usec): min=2469, max=33140, avg=29575.29, stdev=4358.59 00:35:10.826 lat (usec): min=2487, max=33173, avg=29598.87, stdev=4360.30 00:35:10.826 clat percentiles (usec): 00:35:10.826 | 1.00th=[ 2802], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:35:10.826 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:10.826 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[31065], 00:35:10.826 | 99.00th=[31327], 99.50th=[31589], 99.90th=[32900], 99.95th=[33162], 00:35:10.826 | 99.99th=[33162] 00:35:10.826 bw ( KiB/s): min= 2043, max= 3072, per=4.24%, avg=2148.79, stdev=232.17, samples=19 00:35:10.826 iops : min= 510, max= 768, avg=537.16, stdev=58.06, samples=19 00:35:10.826 lat (msec) : 4=1.79%, 10=0.60%, 20=0.89%, 50=96.73% 00:35:10.826 cpu : usr=98.57%, sys=0.89%, ctx=47, majf=0, minf=9 00:35:10.826 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.826 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.826 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.826 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.826 filename2: (groupid=0, jobs=1): err= 0: pid=4037714: Tue Nov 26 19:36:32 2024 00:35:10.826 read: IOPS=590, BW=2360KiB/s (2417kB/s)(23.1MiB/10005msec) 00:35:10.826 slat (nsec): min=4439, max=87088, 
avg=20187.76, stdev=17814.85 00:35:10.826 clat (usec): min=8140, max=75346, avg=26973.00, stdev=5613.87 00:35:10.826 lat (usec): min=8148, max=75359, avg=26993.18, stdev=5619.88 00:35:10.826 clat percentiles (usec): 00:35:10.826 | 1.00th=[17171], 5.00th=[17957], 10.00th=[18220], 20.00th=[20841], 00:35:10.826 | 30.00th=[23200], 40.00th=[27657], 50.00th=[29754], 60.00th=[30278], 00:35:10.826 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[32375], 00:35:10.826 | 99.00th=[43254], 99.50th=[46924], 99.90th=[53740], 99.95th=[53740], 00:35:10.826 | 99.99th=[74974] 00:35:10.826 bw ( KiB/s): min= 1907, max= 2912, per=4.68%, avg=2367.58, stdev=298.29, samples=19 00:35:10.826 iops : min= 476, max= 728, avg=591.84, stdev=74.62, samples=19 00:35:10.826 lat (msec) : 10=0.07%, 20=11.55%, 50=88.11%, 100=0.27% 00:35:10.826 cpu : usr=98.52%, sys=1.06%, ctx=15, majf=0, minf=11 00:35:10.826 IO depths : 1=0.6%, 2=2.6%, 4=10.6%, 8=72.8%, 16=13.4%, 32=0.0%, >=64=0.0% 00:35:10.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.826 complete : 0=0.0%, 4=90.6%, 8=5.3%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.826 issued rwts: total=5904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.826 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.826 filename2: (groupid=0, jobs=1): err= 0: pid=4037715: Tue Nov 26 19:36:32 2024 00:35:10.826 read: IOPS=523, BW=2095KiB/s (2145kB/s)(20.5MiB/10018msec) 00:35:10.827 slat (nsec): min=4925, max=83551, avg=24862.04, stdev=8725.61 00:35:10.827 clat (usec): min=18418, max=37551, avg=30353.08, stdev=1119.10 00:35:10.827 lat (usec): min=18445, max=37585, avg=30377.94, stdev=1119.51 00:35:10.827 clat percentiles (usec): 00:35:10.827 | 1.00th=[25822], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:35:10.827 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:10.827 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:35:10.827 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:35:10.827 | 99.99th=[37487] 00:35:10.827 bw ( KiB/s): min= 2048, max= 2176, per=4.12%, avg=2088.42, stdev=59.48, samples=19 00:35:10.827 iops : min= 512, max= 544, avg=522.11, stdev=14.87, samples=19 00:35:10.827 lat (msec) : 20=0.27%, 50=99.73% 00:35:10.827 cpu : usr=98.58%, sys=1.01%, ctx=15, majf=0, minf=9 00:35:10.827 IO depths : 1=3.9%, 2=10.1%, 4=24.9%, 8=52.5%, 16=8.6%, 32=0.0%, >=64=0.0% 00:35:10.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.827 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.827 issued rwts: total=5246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.827 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.827 filename2: (groupid=0, jobs=1): err= 0: pid=4037716: Tue Nov 26 19:36:32 2024 00:35:10.827 read: IOPS=522, BW=2092KiB/s (2142kB/s)(20.4MiB/10006msec) 00:35:10.827 slat (nsec): min=4579, max=86640, avg=33550.97, stdev=16645.07 00:35:10.827 clat (usec): min=15996, max=66412, avg=30297.95, stdev=1760.09 00:35:10.827 lat (usec): min=16005, max=66427, avg=30331.50, stdev=1759.31 00:35:10.827 clat percentiles (usec): 00:35:10.827 | 1.00th=[29230], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:35:10.827 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:10.827 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:10.827 | 99.00th=[31327], 99.50th=[32113], 99.90th=[54264], 99.95th=[54264], 00:35:10.827 | 99.99th=[66323] 
00:35:10.827 bw ( KiB/s): min= 1920, max= 2176, per=4.12%, avg=2088.63, stdev=74.43, samples=19 00:35:10.827 iops : min= 480, max= 544, avg=522.16, stdev=18.61, samples=19 00:35:10.827 lat (msec) : 20=0.34%, 50=99.35%, 100=0.31% 00:35:10.827 cpu : usr=98.10%, sys=1.04%, ctx=149, majf=0, minf=9 00:35:10.827 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.827 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.827 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.827 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.827 filename2: (groupid=0, jobs=1): err= 0: pid=4037717: Tue Nov 26 19:36:32 2024 00:35:10.827 read: IOPS=522, BW=2092KiB/s (2142kB/s)(20.4MiB/10006msec) 00:35:10.827 slat (nsec): min=4434, max=89769, avg=42284.97, stdev=19776.99 00:35:10.827 clat (usec): min=15848, max=75152, avg=30200.44, stdev=1844.98 00:35:10.827 lat (usec): min=15875, max=75165, avg=30242.73, stdev=1844.15 00:35:10.827 clat percentiles (usec): 00:35:10.827 | 1.00th=[28967], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:35:10.827 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:10.827 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:10.827 | 99.00th=[31327], 99.50th=[31851], 99.90th=[54264], 99.95th=[54264], 00:35:10.827 | 99.99th=[74974] 00:35:10.827 bw ( KiB/s): min= 1920, max= 2176, per=4.12%, avg=2088.63, stdev=74.43, samples=19 00:35:10.827 iops : min= 480, max= 544, avg=522.16, stdev=18.61, samples=19 00:35:10.827 lat (msec) : 20=0.34%, 50=99.35%, 100=0.31% 00:35:10.827 cpu : usr=98.72%, sys=0.89%, ctx=11, majf=0, minf=9 00:35:10.827 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.827 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.827 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.827 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.827 filename2: (groupid=0, jobs=1): err= 0: pid=4037718: Tue Nov 26 19:36:32 2024 00:35:10.827 read: IOPS=522, BW=2091KiB/s (2142kB/s)(20.4MiB/10007msec) 00:35:10.827 slat (nsec): min=4749, max=50794, avg=25536.02, stdev=8028.19 00:35:10.827 clat (usec): min=17527, max=53133, avg=30364.09, stdev=1541.38 00:35:10.827 lat (usec): min=17543, max=53146, avg=30389.63, stdev=1540.82 00:35:10.827 clat percentiles (usec): 00:35:10.827 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:10.827 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:10.827 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[31065], 00:35:10.827 | 99.00th=[31327], 99.50th=[32900], 99.90th=[53216], 99.95th=[53216], 00:35:10.827 | 99.99th=[53216] 00:35:10.827 bw ( KiB/s): min= 1920, max= 2176, per=4.12%, avg=2088.16, stdev=74.71, samples=19 00:35:10.827 iops : min= 480, max= 544, avg=522.00, stdev=18.70, samples=19 00:35:10.827 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:35:10.827 cpu : usr=98.34%, sys=1.27%, ctx=8, majf=0, minf=9 00:35:10.827 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.827 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.827 issued rwts: 
total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.827 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.827 filename2: (groupid=0, jobs=1): err= 0: pid=4037719: Tue Nov 26 19:36:32 2024 00:35:10.827 read: IOPS=522, BW=2092KiB/s (2142kB/s)(20.4MiB/10005msec) 00:35:10.827 slat (nsec): min=6798, max=52669, avg=23234.20, stdev=8932.13 00:35:10.827 clat (usec): min=22755, max=39236, avg=30408.76, stdev=782.65 00:35:10.827 lat (usec): min=22779, max=39255, avg=30432.00, stdev=781.36 00:35:10.827 clat percentiles (usec): 00:35:10.827 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:10.827 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:35:10.827 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[31065], 00:35:10.827 | 99.00th=[31327], 99.50th=[33424], 99.90th=[39060], 99.95th=[39060], 00:35:10.827 | 99.99th=[39060] 00:35:10.827 bw ( KiB/s): min= 2048, max= 2176, per=4.12%, avg=2088.16, stdev=60.74, samples=19 00:35:10.827 iops : min= 512, max= 544, avg=522.00, stdev=15.13, samples=19 00:35:10.827 lat (msec) : 50=100.00% 00:35:10.827 cpu : usr=98.62%, sys=1.00%, ctx=11, majf=0, minf=9 00:35:10.827 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:10.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.827 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.827 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.827 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.827 filename2: (groupid=0, jobs=1): err= 0: pid=4037720: Tue Nov 26 19:36:32 2024 00:35:10.827 read: IOPS=525, BW=2102KiB/s (2153kB/s)(20.6MiB/10016msec) 00:35:10.827 slat (nsec): min=7457, max=76037, avg=31387.03, stdev=17841.46 00:35:10.827 clat (usec): min=11033, max=43475, avg=30151.04, stdev=1554.61 00:35:10.827 lat (usec): min=11047, max=43523, avg=30182.43, stdev=1554.84 00:35:10.827 clat percentiles (usec): 00:35:10.827 | 1.00th=[23725], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:35:10.827 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:10.827 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:10.827 | 99.00th=[31327], 99.50th=[31589], 99.90th=[33162], 99.95th=[42730], 00:35:10.827 | 99.99th=[43254] 00:35:10.827 bw ( KiB/s): min= 2043, max= 2176, per=4.14%, avg=2098.95, stdev=64.55, samples=20 00:35:10.827 iops : min= 510, max= 544, avg=524.70, stdev=16.17, samples=20 00:35:10.827 lat (msec) : 20=0.99%, 50=99.01% 00:35:10.827 cpu : usr=98.73%, sys=0.86%, ctx=13, majf=0, minf=9 00:35:10.827 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.827 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.827 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.827 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.827 filename2: (groupid=0, jobs=1): err= 0: pid=4037721: Tue Nov 26 19:36:32 2024 00:35:10.827 read: IOPS=522, BW=2091KiB/s (2142kB/s)(20.4MiB/10007msec) 00:35:10.827 slat (usec): min=5, max=112, avg=29.82, stdev=15.90 00:35:10.827 clat (usec): min=17562, max=53578, avg=30322.79, stdev=1570.65 00:35:10.827 lat (usec): min=17577, max=53593, avg=30352.60, stdev=1569.03 00:35:10.827 clat percentiles (usec): 00:35:10.827 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 
20.00th=[30016], 00:35:10.827 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:10.827 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[31065], 00:35:10.827 | 99.00th=[31327], 99.50th=[32900], 99.90th=[53740], 99.95th=[53740], 00:35:10.827 | 99.99th=[53740] 00:35:10.827 bw ( KiB/s): min= 1920, max= 2176, per=4.12%, avg=2088.16, stdev=74.71, samples=19 00:35:10.827 iops : min= 480, max= 544, avg=522.00, stdev=18.70, samples=19 00:35:10.827 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:35:10.827 cpu : usr=98.56%, sys=1.05%, ctx=17, majf=0, minf=9 00:35:10.827 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.827 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.827 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.827 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.827 00:35:10.827 Run status group 0 (all jobs): 00:35:10.827 READ: bw=49.4MiB/s (51.8MB/s), 2091KiB/s-2360KiB/s (2142kB/s-2417kB/s), io=495MiB (520MB), run=10002-10023msec 00:35:10.827 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:10.827 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:10.827 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:10.827 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:10.827 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:10.827 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:10.827 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.827 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.827 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.827 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.828 19:36:32 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.828 bdev_null0 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.828 [2024-11-26 19:36:32.922573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.828 bdev_null1 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:10.828 19:36:32 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:10.828 { 00:35:10.828 "params": { 00:35:10.828 "name": "Nvme$subsystem", 00:35:10.828 "trtype": "$TEST_TRANSPORT", 00:35:10.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:10.828 "adrfam": "ipv4", 00:35:10.828 "trsvcid": "$NVMF_PORT", 00:35:10.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:10.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:10.828 "hdgst": ${hdgst:-false}, 00:35:10.828 "ddgst": ${ddgst:-false} 00:35:10.828 }, 00:35:10.828 "method": "bdev_nvme_attach_controller" 00:35:10.828 } 00:35:10.828 EOF 00:35:10.828 )") 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:10.828 { 00:35:10.828 "params": { 00:35:10.828 "name": "Nvme$subsystem", 00:35:10.828 "trtype": "$TEST_TRANSPORT", 00:35:10.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:10.828 "adrfam": "ipv4", 00:35:10.828 "trsvcid": "$NVMF_PORT", 00:35:10.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:10.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:10.828 "hdgst": ${hdgst:-false}, 00:35:10.828 "ddgst": ${ddgst:-false} 00:35:10.828 }, 00:35:10.828 "method": "bdev_nvme_attach_controller" 00:35:10.828 } 00:35:10.828 EOF 00:35:10.828 )") 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 
00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:10.828 19:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:10.829 19:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:35:10.829 19:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:10.829 19:36:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:10.829 "params": { 00:35:10.829 "name": "Nvme0", 00:35:10.829 "trtype": "tcp", 00:35:10.829 "traddr": "10.0.0.2", 00:35:10.829 "adrfam": "ipv4", 00:35:10.829 "trsvcid": "4420", 00:35:10.829 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:10.829 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:10.829 "hdgst": false, 00:35:10.829 "ddgst": false 00:35:10.829 }, 00:35:10.829 "method": "bdev_nvme_attach_controller" 00:35:10.829 },{ 00:35:10.829 "params": { 00:35:10.829 "name": "Nvme1", 00:35:10.829 "trtype": "tcp", 00:35:10.829 "traddr": "10.0.0.2", 00:35:10.829 "adrfam": "ipv4", 00:35:10.829 "trsvcid": "4420", 00:35:10.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:10.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:10.829 "hdgst": false, 00:35:10.829 "ddgst": false 00:35:10.829 }, 00:35:10.829 "method": "bdev_nvme_attach_controller" 00:35:10.829 }' 00:35:10.829 19:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:10.829 19:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:10.829 19:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:10.829 19:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.829 19:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:10.829 19:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:10.829 19:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:10.829 19:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:10.829 19:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:10.829 19:36:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.829 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:10.829 ... 00:35:10.829 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:10.829 ... 
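For reference, the traced invocation above amounts to preloading the SPDK fio bdev plugin and handing fio the generated bdev JSON config plus the generated job file. A minimal standalone sketch, assuming the two /dev/fd streams are instead saved to ordinary files named bdev.json and randread.fio (placeholder names, not artifacts of this run):

    # Sketch only: mirrors the LD_PRELOAD + fio command traced above.
    # bdev.json    - the bdev_nvme_attach_controller config printed by gen_nvmf_target_json
    # randread.fio - the randread jobs listed above (bs=8k,16k,128k, iodepth=8, numjobs=2, runtime=5)
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json randread.fio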
00:35:10.829 fio-3.35 00:35:10.829 Starting 4 threads 00:35:16.178 00:35:16.178 filename0: (groupid=0, jobs=1): err= 0: pid=4039746: Tue Nov 26 19:36:39 2024 00:35:16.178 read: IOPS=2746, BW=21.5MiB/s (22.5MB/s)(107MiB/5002msec) 00:35:16.178 slat (nsec): min=6040, max=38855, avg=9060.10, stdev=3307.50 00:35:16.178 clat (usec): min=726, max=5231, avg=2885.71, stdev=386.64 00:35:16.178 lat (usec): min=743, max=5241, avg=2894.77, stdev=386.59 00:35:16.178 clat percentiles (usec): 00:35:16.178 | 1.00th=[ 1762], 5.00th=[ 2245], 10.00th=[ 2442], 20.00th=[ 2638], 00:35:16.178 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 2999], 00:35:16.178 | 70.00th=[ 2999], 80.00th=[ 3064], 90.00th=[ 3228], 95.00th=[ 3458], 00:35:16.178 | 99.00th=[ 3982], 99.50th=[ 4293], 99.90th=[ 4883], 99.95th=[ 5014], 00:35:16.178 | 99.99th=[ 5211] 00:35:16.178 bw ( KiB/s): min=21376, max=22768, per=26.22%, avg=22010.67, stdev=475.45, samples=9 00:35:16.178 iops : min= 2672, max= 2846, avg=2751.33, stdev=59.43, samples=9 00:35:16.178 lat (usec) : 750=0.01%, 1000=0.18% 00:35:16.178 lat (msec) : 2=1.94%, 4=96.89%, 10=0.98% 00:35:16.178 cpu : usr=95.34%, sys=4.34%, ctx=11, majf=0, minf=9 00:35:16.178 IO depths : 1=0.3%, 2=6.3%, 4=64.8%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.178 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.178 issued rwts: total=13738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.178 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:16.178 filename0: (groupid=0, jobs=1): err= 0: pid=4039747: Tue Nov 26 19:36:39 2024 00:35:16.178 read: IOPS=2595, BW=20.3MiB/s (21.3MB/s)(101MiB/5002msec) 00:35:16.178 slat (nsec): min=6052, max=39400, avg=9002.18, stdev=3371.48 00:35:16.178 clat (usec): min=542, max=5575, avg=3055.65, stdev=464.66 00:35:16.178 lat (usec): min=554, max=5581, avg=3064.65, stdev=464.58 00:35:16.178 clat percentiles (usec): 00:35:16.178 | 1.00th=[ 2057], 5.00th=[ 2409], 10.00th=[ 2606], 20.00th=[ 2802], 00:35:16.178 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 2999], 00:35:16.178 | 70.00th=[ 3064], 80.00th=[ 3261], 90.00th=[ 3556], 95.00th=[ 3982], 00:35:16.178 | 99.00th=[ 4752], 99.50th=[ 5014], 99.90th=[ 5407], 99.95th=[ 5407], 00:35:16.178 | 99.99th=[ 5538] 00:35:16.178 bw ( KiB/s): min=20112, max=21328, per=24.73%, avg=20762.67, stdev=488.98, samples=9 00:35:16.178 iops : min= 2514, max= 2666, avg=2595.33, stdev=61.12, samples=9 00:35:16.178 lat (usec) : 750=0.01% 00:35:16.178 lat (msec) : 2=0.82%, 4=94.35%, 10=4.81% 00:35:16.178 cpu : usr=95.76%, sys=3.94%, ctx=7, majf=0, minf=9 00:35:16.178 IO depths : 1=0.2%, 2=2.9%, 4=68.8%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.178 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.178 issued rwts: total=12984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.178 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:16.178 filename1: (groupid=0, jobs=1): err= 0: pid=4039748: Tue Nov 26 19:36:39 2024 00:35:16.178 read: IOPS=2684, BW=21.0MiB/s (22.0MB/s)(105MiB/5002msec) 00:35:16.178 slat (nsec): min=6031, max=41741, avg=9207.56, stdev=3436.74 00:35:16.178 clat (usec): min=821, max=5494, avg=2952.63, stdev=440.22 00:35:16.178 lat (usec): min=832, max=5507, avg=2961.84, stdev=440.11 00:35:16.178 clat percentiles (usec): 00:35:16.178 | 1.00th=[ 1876], 5.00th=[ 
2278], 10.00th=[ 2474], 20.00th=[ 2671], 00:35:16.178 | 30.00th=[ 2835], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:35:16.178 | 70.00th=[ 3032], 80.00th=[ 3130], 90.00th=[ 3359], 95.00th=[ 3720], 00:35:16.178 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5276], 99.95th=[ 5342], 00:35:16.178 | 99.99th=[ 5473] 00:35:16.178 bw ( KiB/s): min=21008, max=21984, per=25.55%, avg=21448.89, stdev=296.77, samples=9 00:35:16.178 iops : min= 2626, max= 2748, avg=2681.11, stdev=37.10, samples=9 00:35:16.178 lat (usec) : 1000=0.01% 00:35:16.178 lat (msec) : 2=1.47%, 4=95.62%, 10=2.89% 00:35:16.178 cpu : usr=95.62%, sys=4.06%, ctx=7, majf=0, minf=9 00:35:16.178 IO depths : 1=0.2%, 2=5.4%, 4=65.7%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.178 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.178 issued rwts: total=13427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.178 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:16.178 filename1: (groupid=0, jobs=1): err= 0: pid=4039749: Tue Nov 26 19:36:39 2024 00:35:16.178 read: IOPS=2530, BW=19.8MiB/s (20.7MB/s)(99.7MiB/5042msec) 00:35:16.178 slat (nsec): min=6090, max=40096, avg=8923.10, stdev=3319.23 00:35:16.178 clat (usec): min=945, max=42468, avg=3119.62, stdev=774.86 00:35:16.178 lat (usec): min=954, max=42478, avg=3128.54, stdev=774.71 00:35:16.178 clat percentiles (usec): 00:35:16.178 | 1.00th=[ 2114], 5.00th=[ 2507], 10.00th=[ 2704], 20.00th=[ 2900], 00:35:16.178 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:35:16.178 | 70.00th=[ 3097], 80.00th=[ 3294], 90.00th=[ 3654], 95.00th=[ 4228], 00:35:16.178 | 99.00th=[ 5014], 99.50th=[ 5080], 99.90th=[ 5342], 99.95th=[ 5473], 00:35:16.178 | 99.99th=[42206] 00:35:16.178 bw ( KiB/s): min=19607, max=21216, per=24.31%, avg=20411.90, stdev=589.57, samples=10 00:35:16.179 iops : min= 2450, max= 2652, avg=2551.40, stdev=73.83, samples=10 00:35:16.179 lat (usec) : 1000=0.02% 00:35:16.179 lat (msec) : 2=0.64%, 4=92.91%, 10=6.40%, 50=0.02% 00:35:16.179 cpu : usr=95.99%, sys=3.69%, ctx=7, majf=0, minf=9 00:35:16.179 IO depths : 1=0.1%, 2=3.3%, 4=68.2%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.179 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.179 issued rwts: total=12758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.179 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:16.179 00:35:16.179 Run status group 0 (all jobs): 00:35:16.179 READ: bw=82.0MiB/s (86.0MB/s), 19.8MiB/s-21.5MiB/s (20.7MB/s-22.5MB/s), io=413MiB (433MB), run=5002-5042msec 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set 
+x 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.179 00:35:16.179 real 0m24.697s 00:35:16.179 user 4m52.013s 00:35:16.179 sys 0m5.301s 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:16.179 19:36:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:16.179 ************************************ 00:35:16.179 END TEST fio_dif_rand_params 00:35:16.179 ************************************ 00:35:16.179 19:36:39 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:16.179 19:36:39 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:16.179 19:36:39 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:16.179 19:36:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:16.454 ************************************ 00:35:16.454 START TEST fio_dif_digest 00:35:16.454 ************************************ 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:16.454 
19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:16.454 bdev_null0 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:16.454 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:16.455 [2024-11-26 19:36:39.353841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:16.455 { 00:35:16.455 "params": { 00:35:16.455 "name": "Nvme$subsystem", 00:35:16.455 "trtype": 
"$TEST_TRANSPORT", 00:35:16.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:16.455 "adrfam": "ipv4", 00:35:16.455 "trsvcid": "$NVMF_PORT", 00:35:16.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:16.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:16.455 "hdgst": ${hdgst:-false}, 00:35:16.455 "ddgst": ${ddgst:-false} 00:35:16.455 }, 00:35:16.455 "method": "bdev_nvme_attach_controller" 00:35:16.455 } 00:35:16.455 EOF 00:35:16.455 )") 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:16.455 "params": { 00:35:16.455 "name": "Nvme0", 00:35:16.455 "trtype": "tcp", 00:35:16.455 "traddr": "10.0.0.2", 00:35:16.455 "adrfam": "ipv4", 00:35:16.455 "trsvcid": "4420", 00:35:16.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:16.455 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:16.455 "hdgst": true, 00:35:16.455 "ddgst": true 00:35:16.455 }, 00:35:16.455 "method": "bdev_nvme_attach_controller" 00:35:16.455 }' 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:16.455 19:36:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:16.778 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:16.778 ... 
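The attach-controller parameters printed just above enable NVMe/TCP header and data digests on the initiator side ("hdgst": true, "ddgst": true). A hedged sketch of the complete file fio would receive via --spdk_json_conf, assuming the usual SPDK "subsystems"/"bdev" wrapper is added around the snippet shown (the wrapper itself is not visible in this trace); the bdev.json name is a placeholder:

    # Sketch only: writes the single-controller digest config to a placeholder file.
    cat > bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": true, "ddgst": true
              }
            }
          ]
        }
      ]
    }
    EOF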
00:35:16.778 fio-3.35 00:35:16.778 Starting 3 threads 00:35:28.986 00:35:28.986 filename0: (groupid=0, jobs=1): err= 0: pid=4040936: Tue Nov 26 19:36:50 2024 00:35:28.986 read: IOPS=287, BW=35.9MiB/s (37.7MB/s)(361MiB/10048msec) 00:35:28.986 slat (nsec): min=6348, max=34306, avg=11902.02, stdev=1940.73 00:35:28.986 clat (usec): min=5101, max=50901, avg=10413.14, stdev=1311.20 00:35:28.986 lat (usec): min=5111, max=50913, avg=10425.04, stdev=1311.11 00:35:28.986 clat percentiles (usec): 00:35:28.986 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:35:28.986 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:35:28.986 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:35:28.986 | 99.00th=[12518], 99.50th=[12911], 99.90th=[13566], 99.95th=[49021], 00:35:28.986 | 99.99th=[51119] 00:35:28.986 bw ( KiB/s): min=35072, max=37888, per=35.03%, avg=36928.00, stdev=830.05, samples=20 00:35:28.986 iops : min= 274, max= 296, avg=288.50, stdev= 6.48, samples=20 00:35:28.986 lat (msec) : 10=29.86%, 20=70.07%, 50=0.03%, 100=0.03% 00:35:28.986 cpu : usr=94.55%, sys=5.16%, ctx=16, majf=0, minf=54 00:35:28.986 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:28.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.986 issued rwts: total=2887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:28.986 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:28.986 filename0: (groupid=0, jobs=1): err= 0: pid=4040938: Tue Nov 26 19:36:50 2024 00:35:28.986 read: IOPS=268, BW=33.5MiB/s (35.1MB/s)(337MiB/10044msec) 00:35:28.986 slat (nsec): min=6319, max=25012, avg=11724.04, stdev=1728.02 00:35:28.986 clat (usec): min=8101, max=51730, avg=11164.65, stdev=1882.74 00:35:28.986 lat (usec): min=8111, max=51755, avg=11176.38, stdev=1882.84 00:35:28.986 clat percentiles (usec): 00:35:28.986 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:35:28.986 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:35:28.986 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:35:28.986 | 99.00th=[13173], 99.50th=[13435], 99.90th=[51643], 99.95th=[51643], 00:35:28.986 | 99.99th=[51643] 00:35:28.986 bw ( KiB/s): min=31744, max=36352, per=32.66%, avg=34432.00, stdev=1015.54, samples=20 00:35:28.986 iops : min= 248, max= 284, avg=269.00, stdev= 7.93, samples=20 00:35:28.986 lat (msec) : 10=7.24%, 20=92.57%, 50=0.04%, 100=0.15% 00:35:28.986 cpu : usr=94.67%, sys=5.03%, ctx=12, majf=0, minf=27 00:35:28.986 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:28.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.986 issued rwts: total=2692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:28.986 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:28.986 filename0: (groupid=0, jobs=1): err= 0: pid=4040939: Tue Nov 26 19:36:50 2024 00:35:28.986 read: IOPS=268, BW=33.6MiB/s (35.2MB/s)(337MiB/10044msec) 00:35:28.986 slat (nsec): min=6398, max=55822, avg=12024.10, stdev=1919.55 00:35:28.986 clat (usec): min=7056, max=49396, avg=11143.37, stdev=1271.70 00:35:28.986 lat (usec): min=7064, max=49409, avg=11155.40, stdev=1271.66 00:35:28.986 clat percentiles (usec): 00:35:28.986 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10159], 
20.00th=[10552], 00:35:28.986 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:35:28.986 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:35:28.986 | 99.00th=[13042], 99.50th=[13566], 99.90th=[14877], 99.95th=[45351], 00:35:28.986 | 99.99th=[49546] 00:35:28.986 bw ( KiB/s): min=33024, max=35328, per=32.72%, avg=34496.00, stdev=603.95, samples=20 00:35:28.986 iops : min= 258, max= 276, avg=269.50, stdev= 4.72, samples=20 00:35:28.986 lat (msec) : 10=6.41%, 20=93.51%, 50=0.07% 00:35:28.987 cpu : usr=94.79%, sys=4.91%, ctx=17, majf=0, minf=115 00:35:28.987 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:28.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.987 issued rwts: total=2697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:28.987 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:28.987 00:35:28.987 Run status group 0 (all jobs): 00:35:28.987 READ: bw=103MiB/s (108MB/s), 33.5MiB/s-35.9MiB/s (35.1MB/s-37.7MB/s), io=1035MiB (1085MB), run=10044-10048msec 00:35:28.987 19:36:50 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:28.987 19:36:50 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:28.987 19:36:50 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:28.987 19:36:50 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:28.987 19:36:50 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:28.987 19:36:50 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:28.987 19:36:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.987 19:36:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:28.987 19:36:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.987 19:36:50 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:28.987 19:36:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.987 19:36:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:28.987 19:36:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.987 00:35:28.987 real 0m11.238s 00:35:28.987 user 0m35.512s 00:35:28.987 sys 0m1.804s 00:35:28.987 19:36:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:28.987 19:36:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:28.987 ************************************ 00:35:28.987 END TEST fio_dif_digest 00:35:28.987 ************************************ 00:35:28.987 19:36:50 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:28.987 19:36:50 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:28.987 19:36:50 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:28.987 19:36:50 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:28.987 19:36:50 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:28.987 19:36:50 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:28.987 19:36:50 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:28.987 19:36:50 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:28.987 rmmod nvme_tcp 00:35:28.987 rmmod nvme_fabrics 00:35:28.987 rmmod nvme_keyring 00:35:28.987 19:36:50 nvmf_dif 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:28.987 19:36:50 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:28.987 19:36:50 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:28.987 19:36:50 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 4029251 ']' 00:35:28.987 19:36:50 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 4029251 00:35:28.987 19:36:50 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 4029251 ']' 00:35:28.987 19:36:50 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 4029251 00:35:28.987 19:36:50 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:35:28.987 19:36:50 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:28.987 19:36:50 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4029251 00:35:28.987 19:36:50 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:28.987 19:36:50 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:28.987 19:36:50 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4029251' 00:35:28.987 killing process with pid 4029251 00:35:28.987 19:36:50 nvmf_dif -- common/autotest_common.sh@973 -- # kill 4029251 00:35:28.987 19:36:50 nvmf_dif -- common/autotest_common.sh@978 -- # wait 4029251 00:35:28.987 19:36:50 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:28.987 19:36:50 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:30.894 Waiting for block devices as requested 00:35:30.894 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:30.894 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:30.894 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:30.894 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:30.894 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:30.894 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:31.153 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:31.153 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:31.153 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:31.412 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:31.412 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:31.412 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:31.671 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:31.671 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:31.671 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:31.671 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:31.930 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:31.930 19:36:54 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:31.930 19:36:54 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:31.930 19:36:54 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:31.930 19:36:54 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:35:31.930 19:36:54 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:31.930 19:36:54 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:35:31.930 19:36:54 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:31.930 19:36:54 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:31.930 19:36:54 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.930 19:36:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:31.930 19:36:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:34.466 19:36:57 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:34.467 
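The nvmftestfini sequence traced above winds the initiator side back down: it unloads the NVMe/TCP initiator modules, kills the nvmf_tgt application, strips the SPDK-tagged iptables rule, removes the target-side namespace and flushes the test address. Condensed into plain shell it is roughly the following sketch; the pid variable, the interface name cvl_0_1 and the namespace name cvl_0_0_ns_spdk are specific to this host.

# Hedged condensation of the teardown steps shown in the trace above.
sync
modprobe -v -r nvme-tcp                     # also drops nvme_fabrics/nvme_keyring users
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                             # the trace's killprocess adds a liveness poll
iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove the test ACCEPT rule
ip netns delete cvl_0_0_ns_spdk 2>/dev/null # tear down the target-side namespace
ip -4 addr flush cvl_0_1                    # clear the initiator-side test address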
00:35:34.467 real 1m14.445s 00:35:34.467 user 7m9.804s 00:35:34.467 sys 0m20.954s 00:35:34.467 19:36:57 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:34.467 19:36:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:34.467 ************************************ 00:35:34.467 END TEST nvmf_dif 00:35:34.467 ************************************ 00:35:34.467 19:36:57 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:34.467 19:36:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:34.467 19:36:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:34.467 19:36:57 -- common/autotest_common.sh@10 -- # set +x 00:35:34.467 ************************************ 00:35:34.467 START TEST nvmf_abort_qd_sizes 00:35:34.467 ************************************ 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:34.467 * Looking for test storage... 00:35:34.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:34.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.467 --rc genhtml_branch_coverage=1 00:35:34.467 --rc genhtml_function_coverage=1 00:35:34.467 --rc genhtml_legend=1 00:35:34.467 --rc geninfo_all_blocks=1 00:35:34.467 --rc geninfo_unexecuted_blocks=1 00:35:34.467 00:35:34.467 ' 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:34.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.467 --rc genhtml_branch_coverage=1 00:35:34.467 --rc genhtml_function_coverage=1 00:35:34.467 --rc genhtml_legend=1 00:35:34.467 --rc geninfo_all_blocks=1 00:35:34.467 --rc geninfo_unexecuted_blocks=1 00:35:34.467 00:35:34.467 ' 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:34.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.467 --rc genhtml_branch_coverage=1 00:35:34.467 --rc genhtml_function_coverage=1 00:35:34.467 --rc genhtml_legend=1 00:35:34.467 --rc geninfo_all_blocks=1 00:35:34.467 --rc geninfo_unexecuted_blocks=1 00:35:34.467 00:35:34.467 ' 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:34.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.467 --rc genhtml_branch_coverage=1 00:35:34.467 --rc genhtml_function_coverage=1 00:35:34.467 --rc genhtml_legend=1 00:35:34.467 --rc geninfo_all_blocks=1 00:35:34.467 --rc geninfo_unexecuted_blocks=1 00:35:34.467 00:35:34.467 ' 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:34.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:34.467 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:34.468 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:34.468 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:34.468 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:34.468 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:34.468 19:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:34.468 19:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:34.468 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:34.468 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:34.468 19:36:57 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:35:34.468 19:36:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:39.738 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:39.738 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:39.738 Found net devices under 0000:86:00.0: cvl_0_0 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:39.738 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:39.739 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:39.739 Found net devices under 0000:86:00.1: cvl_0_1 00:35:39.739 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:39.739 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:39.739 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:35:39.739 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:39.739 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:39.739 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:39.739 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:39.739 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:39.739 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:39.997 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:39.997 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:39.997 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:39.997 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:39.997 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:39.997 19:37:02 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:39.997 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:39.997 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:39.997 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:39.997 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:39.997 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:39.997 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:39.997 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:39.997 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:39.998 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:39.998 19:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:39.998 19:37:03 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:39.998 19:37:03 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:39.998 19:37:03 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:39.998 19:37:03 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:39.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:39.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:35:39.998 00:35:39.998 --- 10.0.0.2 ping statistics --- 00:35:39.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:39.998 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:35:39.998 19:37:03 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:39.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:39.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:35:39.998 00:35:39.998 --- 10.0.0.1 ping statistics --- 00:35:39.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:39.998 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:35:39.998 19:37:03 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:39.998 19:37:03 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:35:39.998 19:37:03 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:39.998 19:37:03 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:43.285 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:43.285 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:43.285 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:43.285 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:43.285 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:43.285 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:43.285 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:43.285 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:43.285 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:43.285 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:43.285 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:43.285 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:43.285 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:43.285 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:43.285 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:43.285 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:44.222 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:44.481 19:37:07 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:44.481 19:37:07 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:44.481 19:37:07 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:44.481 19:37:07 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:44.481 19:37:07 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:44.481 19:37:07 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:44.481 19:37:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:44.481 19:37:07 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:44.481 19:37:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:44.481 19:37:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:44.481 19:37:07 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=4048829 00:35:44.481 19:37:07 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 4048829 00:35:44.481 19:37:07 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:44.481 19:37:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 4048829 ']' 00:35:44.481 19:37:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:44.481 19:37:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:44.481 19:37:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
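The nvmf_tcp_init and nvmfappstart steps above turn the two E810 ports into a self-contained test topology: cvl_0_0 is moved into a network namespace and addressed as the target (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP/4420, connectivity is ping-checked in both directions, and nvmf_tgt is then launched inside the namespace. A condensed sketch follows; the interface names, paths and core mask are taken from this run and are host-specific assumptions.

# Hedged sketch of the namespace-based back-to-back topology used for the abort tests.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
# Launch the target application inside the namespace (flags from this run):
ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!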
00:35:44.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:44.481 19:37:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:44.481 19:37:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:44.481 [2024-11-26 19:37:07.530490] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:35:44.481 [2024-11-26 19:37:07.530534] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:44.739 [2024-11-26 19:37:07.610531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:44.739 [2024-11-26 19:37:07.656056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:44.739 [2024-11-26 19:37:07.656092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:44.740 [2024-11-26 19:37:07.656099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:44.740 [2024-11-26 19:37:07.656105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:44.740 [2024-11-26 19:37:07.656110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:44.740 [2024-11-26 19:37:07.657696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.740 [2024-11-26 19:37:07.657749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:44.740 [2024-11-26 19:37:07.657856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:44.740 [2024-11-26 19:37:07.657858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:44.740 
19:37:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:44.740 19:37:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:44.740 ************************************ 00:35:44.740 START TEST spdk_target_abort 00:35:44.740 ************************************ 00:35:44.740 19:37:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:44.740 19:37:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:44.740 19:37:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:35:44.740 19:37:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.740 19:37:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:48.023 spdk_targetn1 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:48.023 [2024-11-26 19:37:10.678660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:48.023 [2024-11-26 19:37:10.731143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:48.023 19:37:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:51.302 Initializing NVMe Controllers 00:35:51.302 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:51.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:51.302 Initialization complete. Launching workers. 00:35:51.302 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16803, failed: 0 00:35:51.302 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1414, failed to submit 15389 00:35:51.302 success 765, unsuccessful 649, failed 0 00:35:51.302 19:37:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:51.302 19:37:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:54.586 Initializing NVMe Controllers 00:35:54.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:54.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:54.586 Initialization complete. Launching workers. 00:35:54.586 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8602, failed: 0 00:35:54.586 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1244, failed to submit 7358 00:35:54.586 success 312, unsuccessful 932, failed 0 00:35:54.586 19:37:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:54.586 19:37:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:57.871 Initializing NVMe Controllers 00:35:57.871 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:57.871 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:57.871 Initialization complete. Launching workers. 
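Each pass of the rabort loop above replays the same abort workload against the subsystem at a different queue depth (4, 24 and 64 outstanding commands, 4 KiB I/O, a 50/50 read/write mix). Reproducing one pass by hand would look roughly like the sketch below; it assumes nvmf_tgt is already running and reachable on its default RPC socket, and that the local NVMe at 0000:5e:00.0 is bound to SPDK as in this run.

# Hedged sketch: build the abort-test subsystem and run a single qd=4 pass.
./spdk/scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
./spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
# Drive 4 KiB mixed I/O at queue depth 4 and abort in-flight commands.
./spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'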
00:35:57.871 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38596, failed: 0 00:35:57.871 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2906, failed to submit 35690 00:35:57.871 success 588, unsuccessful 2318, failed 0 00:35:57.872 19:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:57.872 19:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.872 19:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:57.872 19:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.872 19:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:57.872 19:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.872 19:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 4048829 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 4048829 ']' 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 4048829 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4048829 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4048829' 00:35:59.773 killing process with pid 4048829 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 4048829 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 4048829 00:35:59.773 00:35:59.773 real 0m14.913s 00:35:59.773 user 0m56.788s 00:35:59.773 sys 0m2.757s 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:59.773 ************************************ 00:35:59.773 END TEST spdk_target_abort 00:35:59.773 ************************************ 00:35:59.773 19:37:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:59.773 19:37:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:59.773 19:37:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:59.773 19:37:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:59.773 ************************************ 00:35:59.773 START TEST kernel_target_abort 00:35:59.773 
************************************ 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:59.773 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:59.774 19:37:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:03.058 Waiting for block devices as requested 00:36:03.058 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:03.058 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:03.058 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:03.058 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:03.058 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:03.058 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:03.058 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:03.058 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:03.317 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:03.317 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:03.317 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:03.576 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:03.576 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:03.576 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:03.834 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:03.834 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:03.834 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:04.092 19:37:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:04.092 19:37:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:04.092 19:37:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:04.092 19:37:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:04.092 19:37:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:04.092 19:37:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:04.092 19:37:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:04.092 19:37:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:04.092 19:37:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:04.092 No valid GPT data, bailing 00:36:04.092 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:04.092 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:04.092 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:04.092 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:04.092 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:04.092 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:04.092 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:04.092 19:37:27 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:04.092 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:04.092 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:36:04.092 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:04.092 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:36:04.092 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:04.092 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:36:04.092 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:36:04.092 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:36:04.092 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:04.092 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:36:04.092 00:36:04.092 Discovery Log Number of Records 2, Generation counter 2 00:36:04.092 =====Discovery Log Entry 0====== 00:36:04.092 trtype: tcp 00:36:04.092 adrfam: ipv4 00:36:04.092 subtype: current discovery subsystem 00:36:04.093 treq: not specified, sq flow control disable supported 00:36:04.093 portid: 1 00:36:04.093 trsvcid: 4420 00:36:04.093 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:04.093 traddr: 10.0.0.1 00:36:04.093 eflags: none 00:36:04.093 sectype: none 00:36:04.093 =====Discovery Log Entry 1====== 00:36:04.093 trtype: tcp 00:36:04.093 adrfam: ipv4 00:36:04.093 subtype: nvme subsystem 00:36:04.093 treq: not specified, sq flow control disable supported 00:36:04.093 portid: 1 00:36:04.093 trsvcid: 4420 00:36:04.093 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:04.093 traddr: 10.0.0.1 00:36:04.093 eflags: none 00:36:04.093 sectype: none 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:04.093 19:37:27 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:04.093 19:37:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:07.370 Initializing NVMe Controllers 00:36:07.370 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:07.370 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:07.370 Initialization complete. Launching workers. 00:36:07.370 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93965, failed: 0 00:36:07.370 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 93965, failed to submit 0 00:36:07.370 success 0, unsuccessful 93965, failed 0 00:36:07.370 19:37:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:07.370 19:37:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:10.652 Initializing NVMe Controllers 00:36:10.652 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:10.652 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:10.652 Initialization complete. Launching workers. 
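Editor's note: the soft target these abort runs (queue depths 4, 24 and 64) connect to was assembled purely through the kernel nvmet configfs tree, as traced in nvmf/common.sh above. Because xtrace never prints redirection targets, the attr_*/addr_* file names in the sketch below are the standard nvmet configfs attributes filled in as an assumption; the commands, NQN, block device and address values all mirror the log.

```bash
# Sketch of the kernel NVMe/TCP target setup traced above (nvmf/common.sh ~670-705).
# Redirection targets are invisible in xtrace, so the configfs attribute names are
# assumed; every value comes from the log.
subnqn=nqn.2016-06.io.spdk:testnqn
block=/dev/nvme0n1                                    # non-zoned namespace with no GPT, per the checks above
sub=/sys/kernel/config/nvmet/subsystems/$subnqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
mkdir "$sub"
mkdir "$sub/namespaces/1"
mkdir "$port"

echo "SPDK-$subnqn" > "$sub/attr_model"               # assumed attribute for the "echo SPDK-nqn..." line
echo 1              > "$sub/attr_allow_any_host"
echo "$block"       > "$sub/namespaces/1/device_path"
echo 1              > "$sub/namespaces/1/enable"

echo 10.0.0.1       > "$port/addr_traddr"
echo tcp            > "$port/addr_trtype"
echo 4420           > "$port/addr_trsvcid"
echo ipv4           > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                      # publish the subsystem on the TCP port

# Discovery then reports two records (discovery subsystem + testnqn) and the abort
# example is pointed at the target three times, e.g. for the first pass:
./build/examples/abort -q 4 -w rw -M 50 -o 4096 \
	-r "trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:$subnqn"
```

The three passes differ only in -q; at the deeper queue depths the log shows a large "failed to submit" count (abort requests that could not be queued) while the test itself still completes successfully.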
00:36:10.652 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 148508, failed: 0 00:36:10.652 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37422, failed to submit 111086 00:36:10.652 success 0, unsuccessful 37422, failed 0 00:36:10.652 19:37:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:10.652 19:37:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:13.939 Initializing NVMe Controllers 00:36:13.939 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:13.939 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:13.939 Initialization complete. Launching workers. 00:36:13.939 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 141095, failed: 0 00:36:13.939 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35346, failed to submit 105749 00:36:13.939 success 0, unsuccessful 35346, failed 0 00:36:13.939 19:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:13.939 19:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:13.939 19:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:36:13.939 19:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:13.939 19:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:13.939 19:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:13.939 19:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:13.939 19:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:13.939 19:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:13.939 19:37:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:16.475 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:16.475 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:16.475 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:16.475 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:16.475 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:16.475 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:16.475 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:16.475 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:16.475 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:16.475 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:16.475 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:16.475 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:16.475 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:16.475 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:16.475 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:36:16.475 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:17.852 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:18.112 00:36:18.112 real 0m18.184s 00:36:18.112 user 0m9.107s 00:36:18.112 sys 0m5.132s 00:36:18.112 19:37:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:18.112 19:37:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:18.112 ************************************ 00:36:18.112 END TEST kernel_target_abort 00:36:18.112 ************************************ 00:36:18.112 19:37:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:18.112 19:37:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:18.112 19:37:41 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:18.112 19:37:41 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:36:18.112 19:37:41 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:18.112 19:37:41 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:36:18.112 19:37:41 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:18.112 19:37:41 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:18.112 rmmod nvme_tcp 00:36:18.112 rmmod nvme_fabrics 00:36:18.112 rmmod nvme_keyring 00:36:18.112 19:37:41 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:18.112 19:37:41 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:36:18.112 19:37:41 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:36:18.112 19:37:41 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 4048829 ']' 00:36:18.112 19:37:41 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 4048829 00:36:18.112 19:37:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 4048829 ']' 00:36:18.112 19:37:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 4048829 00:36:18.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4048829) - No such process 00:36:18.112 19:37:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 4048829 is not found' 00:36:18.112 Process with pid 4048829 is not found 00:36:18.112 19:37:41 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:18.112 19:37:41 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:21.405 Waiting for block devices as requested 00:36:21.405 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:21.405 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:21.405 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:21.405 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:21.405 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:21.405 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:21.405 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:21.406 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:21.406 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:21.665 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:21.665 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:21.665 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:21.923 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:21.923 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:21.923 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:21.923 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:22.182 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:22.182 19:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:22.182 19:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:22.182 19:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:22.182 19:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:36:22.182 19:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:22.182 19:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:36:22.182 19:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:22.182 19:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:22.182 19:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:22.182 19:37:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:22.182 19:37:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.718 19:37:47 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:24.718 00:36:24.718 real 0m50.184s 00:36:24.718 user 1m10.142s 00:36:24.718 sys 0m16.653s 00:36:24.718 19:37:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:24.718 19:37:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:24.718 ************************************ 00:36:24.718 END TEST nvmf_abort_qd_sizes 00:36:24.718 ************************************ 00:36:24.718 19:37:47 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:24.718 19:37:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:24.718 19:37:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:24.718 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:36:24.718 ************************************ 00:36:24.718 START TEST keyring_file 00:36:24.718 ************************************ 00:36:24.718 19:37:47 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:24.718 * Looking for test storage... 
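Editor's note: the nvmf_abort_qd_sizes suite ends just above with nvmftestfini returning the host to a clean state before keyring_file starts. A condensed sketch of that teardown follows; the pipeline on the iptables line is an assumption about how the separately traced iptables-save / grep -v SPDK_NVMF / iptables-restore steps fit together.

```bash
# Condensed sketch of the nvmftestfini teardown traced above: unload the initiator-side
# NVMe/TCP modules, strip SPDK's firewall rules and flush the test interface.
sync
modprobe -v -r nvme-tcp                                # log shows rmmod nvme_tcp / nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # assumed plumbing of the three traced commands
ip -4 addr flush cvl_0_1                               # cvl_0_1: the E810 test port this run uses
```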
00:36:24.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:24.718 19:37:47 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:24.718 19:37:47 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:36:24.718 19:37:47 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:24.718 19:37:47 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:24.718 19:37:47 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:24.719 19:37:47 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:24.719 19:37:47 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:24.719 19:37:47 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:24.719 19:37:47 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:24.719 19:37:47 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:24.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.719 --rc genhtml_branch_coverage=1 00:36:24.719 --rc genhtml_function_coverage=1 00:36:24.719 --rc genhtml_legend=1 00:36:24.719 --rc geninfo_all_blocks=1 00:36:24.719 --rc geninfo_unexecuted_blocks=1 00:36:24.719 00:36:24.719 ' 00:36:24.719 19:37:47 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:24.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.719 --rc genhtml_branch_coverage=1 00:36:24.719 --rc genhtml_function_coverage=1 00:36:24.719 --rc genhtml_legend=1 00:36:24.719 --rc geninfo_all_blocks=1 
00:36:24.719 --rc geninfo_unexecuted_blocks=1 00:36:24.719 00:36:24.719 ' 00:36:24.719 19:37:47 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:24.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.719 --rc genhtml_branch_coverage=1 00:36:24.719 --rc genhtml_function_coverage=1 00:36:24.719 --rc genhtml_legend=1 00:36:24.719 --rc geninfo_all_blocks=1 00:36:24.719 --rc geninfo_unexecuted_blocks=1 00:36:24.719 00:36:24.719 ' 00:36:24.719 19:37:47 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:24.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.719 --rc genhtml_branch_coverage=1 00:36:24.719 --rc genhtml_function_coverage=1 00:36:24.719 --rc genhtml_legend=1 00:36:24.719 --rc geninfo_all_blocks=1 00:36:24.719 --rc geninfo_unexecuted_blocks=1 00:36:24.719 00:36:24.719 ' 00:36:24.719 19:37:47 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:24.719 19:37:47 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:24.719 19:37:47 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.719 19:37:47 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.719 19:37:47 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.719 19:37:47 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.719 19:37:47 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.719 19:37:47 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.719 19:37:47 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:24.719 19:37:47 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@51 -- # : 0 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:24.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:24.719 19:37:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:24.719 19:37:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:24.719 19:37:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:24.719 19:37:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:24.719 19:37:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:24.719 19:37:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
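Editor's note: the "[: : integer expression expected" message above (nvmf/common.sh line 33) is not a test failure; it is bash objecting to an arithmetic test whose operand expanded to an empty string. The variable involved is not visible in the xtrace, so FLAG below is a placeholder illustrating the failure mode and a defensive spelling.

```bash
# Reproduces the warning seen above and shows a guard; FLAG is a placeholder,
# the real variable name is not visible in the trace.
FLAG=""
[ "$FLAG" -eq 1 ]          # prints "[: : integer expression expected", exits 2
[ "${FLAG:-0}" -eq 1 ]     # defaulting the expansion keeps the test numeric (exits 1, no warning)
```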
00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.U4DNYBpbKs 00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.U4DNYBpbKs 00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.U4DNYBpbKs 00:36:24.719 19:37:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.U4DNYBpbKs 00:36:24.719 19:37:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TU7WaWE7ox 00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:24.719 19:37:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TU7WaWE7ox 00:36:24.719 19:37:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TU7WaWE7ox 00:36:24.719 19:37:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.TU7WaWE7ox 00:36:24.719 19:37:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=4057621 00:36:24.719 19:37:47 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:24.719 19:37:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 4057621 00:36:24.720 19:37:47 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 4057621 ']' 00:36:24.720 19:37:47 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:24.720 19:37:47 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:24.720 19:37:47 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:24.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:24.720 19:37:47 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:24.720 19:37:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:24.720 [2024-11-26 19:37:47.705259] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:36:24.720 [2024-11-26 19:37:47.705313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4057621 ] 00:36:24.720 [2024-11-26 19:37:47.780379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.720 [2024-11-26 19:37:47.822462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.978 19:37:48 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:24.978 19:37:48 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:24.978 19:37:48 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:24.978 19:37:48 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.978 19:37:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:24.978 [2024-11-26 19:37:48.035159] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:24.978 null0 00:36:24.978 [2024-11-26 19:37:48.067216] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:24.978 [2024-11-26 19:37:48.067570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:24.978 19:37:48 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.978 19:37:48 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:24.978 19:37:48 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:24.978 19:37:48 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:24.978 19:37:48 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:25.236 19:37:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:25.236 19:37:48 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:25.236 19:37:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:25.236 19:37:48 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:25.236 19:37:48 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.237 19:37:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:25.237 [2024-11-26 19:37:48.095278] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:25.237 request: 00:36:25.237 { 00:36:25.237 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:25.237 "secure_channel": false, 00:36:25.237 "listen_address": { 00:36:25.237 "trtype": "tcp", 00:36:25.237 "traddr": "127.0.0.1", 00:36:25.237 "trsvcid": "4420" 00:36:25.237 }, 00:36:25.237 "method": "nvmf_subsystem_add_listener", 00:36:25.237 "req_id": 1 00:36:25.237 } 00:36:25.237 Got JSON-RPC error response 00:36:25.237 response: 00:36:25.237 { 00:36:25.237 
"code": -32602, 00:36:25.237 "message": "Invalid parameters" 00:36:25.237 } 00:36:25.237 19:37:48 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:25.237 19:37:48 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:25.237 19:37:48 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:25.237 19:37:48 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:25.237 19:37:48 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:25.237 19:37:48 keyring_file -- keyring/file.sh@47 -- # bperfpid=4057628 00:36:25.237 19:37:48 keyring_file -- keyring/file.sh@49 -- # waitforlisten 4057628 /var/tmp/bperf.sock 00:36:25.237 19:37:48 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:25.237 19:37:48 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 4057628 ']' 00:36:25.237 19:37:48 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:25.237 19:37:48 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:25.237 19:37:48 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:25.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:25.237 19:37:48 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:25.237 19:37:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:25.237 [2024-11-26 19:37:48.149900] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:36:25.237 [2024-11-26 19:37:48.149940] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4057628 ] 00:36:25.237 [2024-11-26 19:37:48.223828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:25.237 [2024-11-26 19:37:48.267166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:25.495 19:37:48 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:25.495 19:37:48 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:25.495 19:37:48 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.U4DNYBpbKs 00:36:25.495 19:37:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.U4DNYBpbKs 00:36:25.495 19:37:48 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TU7WaWE7ox 00:36:25.495 19:37:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TU7WaWE7ox 00:36:25.754 19:37:48 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:36:25.754 19:37:48 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:25.754 19:37:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:25.754 19:37:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:25.754 19:37:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:36:26.012 19:37:48 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.U4DNYBpbKs == \/\t\m\p\/\t\m\p\.\U\4\D\N\Y\B\p\b\K\s ]] 00:36:26.012 19:37:48 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:36:26.012 19:37:48 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:36:26.012 19:37:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:26.012 19:37:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:26.012 19:37:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:26.269 19:37:49 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.TU7WaWE7ox == \/\t\m\p\/\t\m\p\.\T\U\7\W\a\W\E\7\o\x ]] 00:36:26.269 19:37:49 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:36:26.269 19:37:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:26.269 19:37:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:26.269 19:37:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:26.269 19:37:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:26.269 19:37:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:26.269 19:37:49 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:26.269 19:37:49 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:36:26.269 19:37:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:26.270 19:37:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:26.270 19:37:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:26.270 19:37:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:26.270 19:37:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:26.527 19:37:49 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:36:26.527 19:37:49 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:26.527 19:37:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:26.785 [2024-11-26 19:37:49.667229] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:26.785 nvme0n1 00:36:26.785 19:37:49 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:36:26.785 19:37:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:26.785 19:37:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:26.785 19:37:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:26.785 19:37:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:26.785 19:37:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:27.044 19:37:49 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:36:27.044 19:37:49 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:36:27.044 19:37:49 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:36:27.044 19:37:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:27.044 19:37:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:27.044 19:37:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:27.044 19:37:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:27.044 19:37:50 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:36:27.044 19:37:50 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:27.302 Running I/O for 1 seconds... 00:36:28.238 19216.00 IOPS, 75.06 MiB/s 00:36:28.238 Latency(us) 00:36:28.238 [2024-11-26T18:37:51.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:28.238 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:28.238 nvme0n1 : 1.00 19265.13 75.25 0.00 0.00 6632.27 4088.20 14542.75 00:36:28.238 [2024-11-26T18:37:51.352Z] =================================================================================================================== 00:36:28.238 [2024-11-26T18:37:51.352Z] Total : 19265.13 75.25 0.00 0.00 6632.27 4088.20 14542.75 00:36:28.238 { 00:36:28.238 "results": [ 00:36:28.238 { 00:36:28.238 "job": "nvme0n1", 00:36:28.238 "core_mask": "0x2", 00:36:28.238 "workload": "randrw", 00:36:28.238 "percentage": 50, 00:36:28.238 "status": "finished", 00:36:28.238 "queue_depth": 128, 00:36:28.238 "io_size": 4096, 00:36:28.238 "runtime": 1.004094, 00:36:28.238 "iops": 19265.128563660375, 00:36:28.238 "mibps": 75.25440845179834, 00:36:28.238 "io_failed": 0, 00:36:28.238 "io_timeout": 0, 00:36:28.238 "avg_latency_us": 6632.269206349207, 00:36:28.238 "min_latency_us": 4088.1980952380954, 00:36:28.238 "max_latency_us": 14542.750476190477 00:36:28.238 } 00:36:28.238 ], 00:36:28.238 "core_count": 1 00:36:28.238 } 00:36:28.238 19:37:51 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:28.238 19:37:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:28.497 19:37:51 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:28.497 19:37:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:28.497 19:37:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:28.497 19:37:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:28.497 19:37:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:28.497 19:37:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.755 19:37:51 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:28.755 19:37:51 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:28.755 19:37:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:28.755 19:37:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:28.755 19:37:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:28.755 19:37:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:28.755 19:37:51 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.755 19:37:51 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:28.755 19:37:51 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:28.755 19:37:51 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:28.755 19:37:51 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:28.755 19:37:51 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:28.755 19:37:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:28.755 19:37:51 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:28.755 19:37:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:28.755 19:37:51 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:28.755 19:37:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:29.014 [2024-11-26 19:37:52.029978] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:29.015 [2024-11-26 19:37:52.030699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1653210 (107): Transport endpoint is not connected 00:36:29.015 [2024-11-26 19:37:52.031692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1653210 (9): Bad file descriptor 00:36:29.015 [2024-11-26 19:37:52.032693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:29.015 [2024-11-26 19:37:52.032702] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:29.015 [2024-11-26 19:37:52.032709] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:29.015 [2024-11-26 19:37:52.032717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:36:29.015 request: 00:36:29.015 { 00:36:29.015 "name": "nvme0", 00:36:29.015 "trtype": "tcp", 00:36:29.015 "traddr": "127.0.0.1", 00:36:29.015 "adrfam": "ipv4", 00:36:29.015 "trsvcid": "4420", 00:36:29.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:29.015 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:29.015 "prchk_reftag": false, 00:36:29.015 "prchk_guard": false, 00:36:29.015 "hdgst": false, 00:36:29.015 "ddgst": false, 00:36:29.015 "psk": "key1", 00:36:29.015 "allow_unrecognized_csi": false, 00:36:29.015 "method": "bdev_nvme_attach_controller", 00:36:29.015 "req_id": 1 00:36:29.015 } 00:36:29.015 Got JSON-RPC error response 00:36:29.015 response: 00:36:29.015 { 00:36:29.015 "code": -5, 00:36:29.015 "message": "Input/output error" 00:36:29.015 } 00:36:29.015 19:37:52 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:29.015 19:37:52 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:29.015 19:37:52 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:29.015 19:37:52 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:29.015 19:37:52 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:29.015 19:37:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:29.015 19:37:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:29.015 19:37:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:29.015 19:37:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:29.015 19:37:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:29.271 19:37:52 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:29.271 19:37:52 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:29.271 19:37:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:29.271 19:37:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:29.271 19:37:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:29.271 19:37:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:29.271 19:37:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:29.529 19:37:52 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:29.529 19:37:52 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:29.529 19:37:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:29.529 19:37:52 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:29.529 19:37:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:29.787 19:37:52 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:29.787 19:37:52 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:29.788 19:37:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:30.046 19:37:53 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:36:30.046 19:37:53 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.U4DNYBpbKs 00:36:30.046 19:37:53 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.U4DNYBpbKs 00:36:30.046 19:37:53 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:30.046 19:37:53 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.U4DNYBpbKs 00:36:30.046 19:37:53 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:30.046 19:37:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:30.046 19:37:53 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:30.046 19:37:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:30.046 19:37:53 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.U4DNYBpbKs 00:36:30.046 19:37:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.U4DNYBpbKs 00:36:30.303 [2024-11-26 19:37:53.193760] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.U4DNYBpbKs': 0100660 00:36:30.303 [2024-11-26 19:37:53.193788] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:30.303 request: 00:36:30.303 { 00:36:30.303 "name": "key0", 00:36:30.303 "path": "/tmp/tmp.U4DNYBpbKs", 00:36:30.303 "method": "keyring_file_add_key", 00:36:30.303 "req_id": 1 00:36:30.303 } 00:36:30.303 Got JSON-RPC error response 00:36:30.303 response: 00:36:30.303 { 00:36:30.303 "code": -1, 00:36:30.303 "message": "Operation not permitted" 00:36:30.303 } 00:36:30.303 19:37:53 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:30.303 19:37:53 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:30.303 19:37:53 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:30.303 19:37:53 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:30.303 19:37:53 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.U4DNYBpbKs 00:36:30.303 19:37:53 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.U4DNYBpbKs 00:36:30.303 19:37:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.U4DNYBpbKs 00:36:30.303 19:37:53 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.U4DNYBpbKs 00:36:30.561 19:37:53 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:30.561 19:37:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:30.561 19:37:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:30.561 19:37:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:30.561 19:37:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:30.561 19:37:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:30.561 19:37:53 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:30.561 19:37:53 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:30.561 19:37:53 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:30.561 19:37:53 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:30.561 19:37:53 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:30.561 19:37:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:30.561 19:37:53 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:30.561 19:37:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:30.561 19:37:53 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:30.561 19:37:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:30.819 [2024-11-26 19:37:53.779331] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.U4DNYBpbKs': No such file or directory 00:36:30.819 [2024-11-26 19:37:53.779358] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:30.819 [2024-11-26 19:37:53.779374] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:30.819 [2024-11-26 19:37:53.779381] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:30.819 [2024-11-26 19:37:53.779388] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:30.819 [2024-11-26 19:37:53.779394] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:30.819 request: 00:36:30.819 { 00:36:30.819 "name": "nvme0", 00:36:30.819 "trtype": "tcp", 00:36:30.819 "traddr": "127.0.0.1", 00:36:30.819 "adrfam": "ipv4", 00:36:30.819 "trsvcid": "4420", 00:36:30.819 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:30.819 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:30.819 "prchk_reftag": false, 00:36:30.819 "prchk_guard": false, 00:36:30.819 "hdgst": false, 00:36:30.819 "ddgst": false, 00:36:30.819 "psk": "key0", 00:36:30.819 "allow_unrecognized_csi": false, 00:36:30.819 "method": "bdev_nvme_attach_controller", 00:36:30.819 "req_id": 1 00:36:30.819 } 00:36:30.819 Got JSON-RPC error response 00:36:30.819 response: 00:36:30.819 { 00:36:30.819 "code": -19, 00:36:30.819 "message": "No such device" 00:36:30.819 } 00:36:30.819 19:37:53 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:30.819 19:37:53 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:30.819 19:37:53 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:30.819 19:37:53 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:30.819 19:37:53 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:30.819 19:37:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:31.078 19:37:53 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:31.078 19:37:53 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:36:31.078 19:37:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:31.078 19:37:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:31.078 19:37:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:31.078 19:37:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:31.078 19:37:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ra85WiYjwX 00:36:31.078 19:37:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:31.078 19:37:53 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:31.078 19:37:53 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:31.078 19:37:53 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:31.078 19:37:53 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:31.078 19:37:53 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:31.078 19:37:53 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:31.078 19:37:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ra85WiYjwX 00:36:31.078 19:37:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ra85WiYjwX 00:36:31.078 19:37:54 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.ra85WiYjwX 00:36:31.078 19:37:54 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ra85WiYjwX 00:36:31.078 19:37:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ra85WiYjwX 00:36:31.337 19:37:54 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:31.337 19:37:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:31.595 nvme0n1 00:36:31.595 19:37:54 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:31.595 19:37:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:31.595 19:37:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:31.595 19:37:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:31.595 19:37:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:31.595 19:37:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:31.595 19:37:54 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:31.595 19:37:54 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:31.595 19:37:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:31.853 19:37:54 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:31.853 19:37:54 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:31.853 19:37:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:31.853 19:37:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:31.853 19:37:54 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:32.111 19:37:55 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:32.111 19:37:55 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:32.111 19:37:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:32.111 19:37:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:32.111 19:37:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:32.111 19:37:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:32.111 19:37:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:32.466 19:37:55 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:32.466 19:37:55 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:32.466 19:37:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:32.466 19:37:55 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:32.466 19:37:55 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:32.466 19:37:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:32.732 19:37:55 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:32.732 19:37:55 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ra85WiYjwX 00:36:32.732 19:37:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ra85WiYjwX 00:36:33.005 19:37:55 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TU7WaWE7ox 00:36:33.005 19:37:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TU7WaWE7ox 00:36:33.005 19:37:56 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:33.005 19:37:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:33.283 nvme0n1 00:36:33.283 19:37:56 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:33.283 19:37:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:33.560 19:37:56 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:33.560 "subsystems": [ 00:36:33.560 { 00:36:33.560 "subsystem": "keyring", 00:36:33.560 "config": [ 00:36:33.560 { 00:36:33.560 "method": "keyring_file_add_key", 00:36:33.560 "params": { 00:36:33.560 "name": "key0", 00:36:33.560 "path": "/tmp/tmp.ra85WiYjwX" 00:36:33.560 } 00:36:33.560 }, 00:36:33.560 { 00:36:33.560 "method": "keyring_file_add_key", 00:36:33.560 "params": { 00:36:33.560 "name": "key1", 00:36:33.560 "path": "/tmp/tmp.TU7WaWE7ox" 00:36:33.560 } 00:36:33.560 } 00:36:33.560 ] 00:36:33.560 
}, 00:36:33.560 { 00:36:33.560 "subsystem": "iobuf", 00:36:33.560 "config": [ 00:36:33.560 { 00:36:33.560 "method": "iobuf_set_options", 00:36:33.560 "params": { 00:36:33.560 "small_pool_count": 8192, 00:36:33.560 "large_pool_count": 1024, 00:36:33.560 "small_bufsize": 8192, 00:36:33.560 "large_bufsize": 135168, 00:36:33.560 "enable_numa": false 00:36:33.560 } 00:36:33.560 } 00:36:33.560 ] 00:36:33.560 }, 00:36:33.560 { 00:36:33.560 "subsystem": "sock", 00:36:33.560 "config": [ 00:36:33.560 { 00:36:33.560 "method": "sock_set_default_impl", 00:36:33.560 "params": { 00:36:33.560 "impl_name": "posix" 00:36:33.560 } 00:36:33.560 }, 00:36:33.560 { 00:36:33.560 "method": "sock_impl_set_options", 00:36:33.560 "params": { 00:36:33.560 "impl_name": "ssl", 00:36:33.560 "recv_buf_size": 4096, 00:36:33.560 "send_buf_size": 4096, 00:36:33.560 "enable_recv_pipe": true, 00:36:33.560 "enable_quickack": false, 00:36:33.560 "enable_placement_id": 0, 00:36:33.560 "enable_zerocopy_send_server": true, 00:36:33.560 "enable_zerocopy_send_client": false, 00:36:33.560 "zerocopy_threshold": 0, 00:36:33.560 "tls_version": 0, 00:36:33.560 "enable_ktls": false 00:36:33.560 } 00:36:33.560 }, 00:36:33.560 { 00:36:33.560 "method": "sock_impl_set_options", 00:36:33.560 "params": { 00:36:33.560 "impl_name": "posix", 00:36:33.560 "recv_buf_size": 2097152, 00:36:33.560 "send_buf_size": 2097152, 00:36:33.560 "enable_recv_pipe": true, 00:36:33.560 "enable_quickack": false, 00:36:33.560 "enable_placement_id": 0, 00:36:33.560 "enable_zerocopy_send_server": true, 00:36:33.560 "enable_zerocopy_send_client": false, 00:36:33.560 "zerocopy_threshold": 0, 00:36:33.560 "tls_version": 0, 00:36:33.560 "enable_ktls": false 00:36:33.560 } 00:36:33.560 } 00:36:33.560 ] 00:36:33.560 }, 00:36:33.560 { 00:36:33.560 "subsystem": "vmd", 00:36:33.560 "config": [] 00:36:33.560 }, 00:36:33.560 { 00:36:33.560 "subsystem": "accel", 00:36:33.560 "config": [ 00:36:33.560 { 00:36:33.560 "method": "accel_set_options", 00:36:33.560 "params": { 00:36:33.560 "small_cache_size": 128, 00:36:33.560 "large_cache_size": 16, 00:36:33.560 "task_count": 2048, 00:36:33.560 "sequence_count": 2048, 00:36:33.560 "buf_count": 2048 00:36:33.560 } 00:36:33.560 } 00:36:33.560 ] 00:36:33.560 }, 00:36:33.560 { 00:36:33.560 "subsystem": "bdev", 00:36:33.560 "config": [ 00:36:33.560 { 00:36:33.560 "method": "bdev_set_options", 00:36:33.560 "params": { 00:36:33.560 "bdev_io_pool_size": 65535, 00:36:33.560 "bdev_io_cache_size": 256, 00:36:33.560 "bdev_auto_examine": true, 00:36:33.560 "iobuf_small_cache_size": 128, 00:36:33.560 "iobuf_large_cache_size": 16 00:36:33.560 } 00:36:33.560 }, 00:36:33.560 { 00:36:33.560 "method": "bdev_raid_set_options", 00:36:33.560 "params": { 00:36:33.560 "process_window_size_kb": 1024, 00:36:33.560 "process_max_bandwidth_mb_sec": 0 00:36:33.560 } 00:36:33.560 }, 00:36:33.560 { 00:36:33.560 "method": "bdev_iscsi_set_options", 00:36:33.560 "params": { 00:36:33.560 "timeout_sec": 30 00:36:33.560 } 00:36:33.560 }, 00:36:33.560 { 00:36:33.560 "method": "bdev_nvme_set_options", 00:36:33.560 "params": { 00:36:33.560 "action_on_timeout": "none", 00:36:33.560 "timeout_us": 0, 00:36:33.560 "timeout_admin_us": 0, 00:36:33.560 "keep_alive_timeout_ms": 10000, 00:36:33.560 "arbitration_burst": 0, 00:36:33.560 "low_priority_weight": 0, 00:36:33.560 "medium_priority_weight": 0, 00:36:33.560 "high_priority_weight": 0, 00:36:33.560 "nvme_adminq_poll_period_us": 10000, 00:36:33.560 "nvme_ioq_poll_period_us": 0, 00:36:33.560 "io_queue_requests": 512, 00:36:33.560 
"delay_cmd_submit": true, 00:36:33.560 "transport_retry_count": 4, 00:36:33.560 "bdev_retry_count": 3, 00:36:33.560 "transport_ack_timeout": 0, 00:36:33.560 "ctrlr_loss_timeout_sec": 0, 00:36:33.560 "reconnect_delay_sec": 0, 00:36:33.560 "fast_io_fail_timeout_sec": 0, 00:36:33.560 "disable_auto_failback": false, 00:36:33.560 "generate_uuids": false, 00:36:33.560 "transport_tos": 0, 00:36:33.560 "nvme_error_stat": false, 00:36:33.560 "rdma_srq_size": 0, 00:36:33.560 "io_path_stat": false, 00:36:33.560 "allow_accel_sequence": false, 00:36:33.560 "rdma_max_cq_size": 0, 00:36:33.560 "rdma_cm_event_timeout_ms": 0, 00:36:33.560 "dhchap_digests": [ 00:36:33.560 "sha256", 00:36:33.560 "sha384", 00:36:33.560 "sha512" 00:36:33.560 ], 00:36:33.560 "dhchap_dhgroups": [ 00:36:33.560 "null", 00:36:33.560 "ffdhe2048", 00:36:33.560 "ffdhe3072", 00:36:33.560 "ffdhe4096", 00:36:33.560 "ffdhe6144", 00:36:33.560 "ffdhe8192" 00:36:33.560 ] 00:36:33.560 } 00:36:33.560 }, 00:36:33.560 { 00:36:33.560 "method": "bdev_nvme_attach_controller", 00:36:33.560 "params": { 00:36:33.560 "name": "nvme0", 00:36:33.560 "trtype": "TCP", 00:36:33.560 "adrfam": "IPv4", 00:36:33.560 "traddr": "127.0.0.1", 00:36:33.560 "trsvcid": "4420", 00:36:33.560 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:33.560 "prchk_reftag": false, 00:36:33.560 "prchk_guard": false, 00:36:33.560 "ctrlr_loss_timeout_sec": 0, 00:36:33.560 "reconnect_delay_sec": 0, 00:36:33.560 "fast_io_fail_timeout_sec": 0, 00:36:33.560 "psk": "key0", 00:36:33.560 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:33.560 "hdgst": false, 00:36:33.560 "ddgst": false, 00:36:33.560 "multipath": "multipath" 00:36:33.560 } 00:36:33.560 }, 00:36:33.560 { 00:36:33.560 "method": "bdev_nvme_set_hotplug", 00:36:33.560 "params": { 00:36:33.560 "period_us": 100000, 00:36:33.560 "enable": false 00:36:33.560 } 00:36:33.560 }, 00:36:33.560 { 00:36:33.560 "method": "bdev_wait_for_examine" 00:36:33.560 } 00:36:33.560 ] 00:36:33.561 }, 00:36:33.561 { 00:36:33.561 "subsystem": "nbd", 00:36:33.561 "config": [] 00:36:33.561 } 00:36:33.561 ] 00:36:33.561 }' 00:36:33.561 19:37:56 keyring_file -- keyring/file.sh@115 -- # killprocess 4057628 00:36:33.561 19:37:56 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 4057628 ']' 00:36:33.561 19:37:56 keyring_file -- common/autotest_common.sh@958 -- # kill -0 4057628 00:36:33.561 19:37:56 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:33.561 19:37:56 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:33.561 19:37:56 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4057628 00:36:33.561 19:37:56 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:33.561 19:37:56 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:33.561 19:37:56 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4057628' 00:36:33.561 killing process with pid 4057628 00:36:33.561 19:37:56 keyring_file -- common/autotest_common.sh@973 -- # kill 4057628 00:36:33.561 Received shutdown signal, test time was about 1.000000 seconds 00:36:33.561 00:36:33.561 Latency(us) 00:36:33.561 [2024-11-26T18:37:56.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:33.561 [2024-11-26T18:37:56.675Z] =================================================================================================================== 00:36:33.561 [2024-11-26T18:37:56.675Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:33.561 19:37:56 
keyring_file -- common/autotest_common.sh@978 -- # wait 4057628 00:36:33.830 19:37:56 keyring_file -- keyring/file.sh@118 -- # bperfpid=4059152 00:36:33.830 19:37:56 keyring_file -- keyring/file.sh@120 -- # waitforlisten 4059152 /var/tmp/bperf.sock 00:36:33.830 19:37:56 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 4059152 ']' 00:36:33.830 19:37:56 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:33.830 19:37:56 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:33.830 19:37:56 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:33.830 19:37:56 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:33.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:33.830 19:37:56 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:33.830 "subsystems": [ 00:36:33.830 { 00:36:33.830 "subsystem": "keyring", 00:36:33.830 "config": [ 00:36:33.830 { 00:36:33.830 "method": "keyring_file_add_key", 00:36:33.830 "params": { 00:36:33.830 "name": "key0", 00:36:33.830 "path": "/tmp/tmp.ra85WiYjwX" 00:36:33.830 } 00:36:33.830 }, 00:36:33.830 { 00:36:33.830 "method": "keyring_file_add_key", 00:36:33.830 "params": { 00:36:33.830 "name": "key1", 00:36:33.830 "path": "/tmp/tmp.TU7WaWE7ox" 00:36:33.830 } 00:36:33.830 } 00:36:33.830 ] 00:36:33.830 }, 00:36:33.830 { 00:36:33.830 "subsystem": "iobuf", 00:36:33.830 "config": [ 00:36:33.830 { 00:36:33.830 "method": "iobuf_set_options", 00:36:33.830 "params": { 00:36:33.830 "small_pool_count": 8192, 00:36:33.830 "large_pool_count": 1024, 00:36:33.830 "small_bufsize": 8192, 00:36:33.830 "large_bufsize": 135168, 00:36:33.830 "enable_numa": false 00:36:33.830 } 00:36:33.830 } 00:36:33.830 ] 00:36:33.830 }, 00:36:33.830 { 00:36:33.830 "subsystem": "sock", 00:36:33.830 "config": [ 00:36:33.830 { 00:36:33.830 "method": "sock_set_default_impl", 00:36:33.830 "params": { 00:36:33.830 "impl_name": "posix" 00:36:33.830 } 00:36:33.830 }, 00:36:33.830 { 00:36:33.830 "method": "sock_impl_set_options", 00:36:33.830 "params": { 00:36:33.830 "impl_name": "ssl", 00:36:33.830 "recv_buf_size": 4096, 00:36:33.830 "send_buf_size": 4096, 00:36:33.830 "enable_recv_pipe": true, 00:36:33.830 "enable_quickack": false, 00:36:33.830 "enable_placement_id": 0, 00:36:33.830 "enable_zerocopy_send_server": true, 00:36:33.830 "enable_zerocopy_send_client": false, 00:36:33.830 "zerocopy_threshold": 0, 00:36:33.830 "tls_version": 0, 00:36:33.830 "enable_ktls": false 00:36:33.830 } 00:36:33.830 }, 00:36:33.830 { 00:36:33.830 "method": "sock_impl_set_options", 00:36:33.830 "params": { 00:36:33.830 "impl_name": "posix", 00:36:33.830 "recv_buf_size": 2097152, 00:36:33.830 "send_buf_size": 2097152, 00:36:33.830 "enable_recv_pipe": true, 00:36:33.830 "enable_quickack": false, 00:36:33.830 "enable_placement_id": 0, 00:36:33.830 "enable_zerocopy_send_server": true, 00:36:33.830 "enable_zerocopy_send_client": false, 00:36:33.830 "zerocopy_threshold": 0, 00:36:33.830 "tls_version": 0, 00:36:33.830 "enable_ktls": false 00:36:33.830 } 00:36:33.830 } 00:36:33.830 ] 00:36:33.830 }, 00:36:33.830 { 00:36:33.830 "subsystem": "vmd", 00:36:33.830 "config": [] 00:36:33.830 }, 00:36:33.830 { 00:36:33.830 "subsystem": "accel", 00:36:33.830 "config": [ 00:36:33.830 
{ 00:36:33.830 "method": "accel_set_options", 00:36:33.830 "params": { 00:36:33.830 "small_cache_size": 128, 00:36:33.830 "large_cache_size": 16, 00:36:33.830 "task_count": 2048, 00:36:33.830 "sequence_count": 2048, 00:36:33.830 "buf_count": 2048 00:36:33.830 } 00:36:33.830 } 00:36:33.830 ] 00:36:33.830 }, 00:36:33.830 { 00:36:33.830 "subsystem": "bdev", 00:36:33.830 "config": [ 00:36:33.830 { 00:36:33.830 "method": "bdev_set_options", 00:36:33.830 "params": { 00:36:33.831 "bdev_io_pool_size": 65535, 00:36:33.831 "bdev_io_cache_size": 256, 00:36:33.831 "bdev_auto_examine": true, 00:36:33.831 "iobuf_small_cache_size": 128, 00:36:33.831 "iobuf_large_cache_size": 16 00:36:33.831 } 00:36:33.831 }, 00:36:33.831 { 00:36:33.831 "method": "bdev_raid_set_options", 00:36:33.831 "params": { 00:36:33.831 "process_window_size_kb": 1024, 00:36:33.831 "process_max_bandwidth_mb_sec": 0 00:36:33.831 } 00:36:33.831 }, 00:36:33.831 { 00:36:33.831 "method": "bdev_iscsi_set_options", 00:36:33.831 "params": { 00:36:33.831 "timeout_sec": 30 00:36:33.831 } 00:36:33.831 }, 00:36:33.831 { 00:36:33.831 "method": "bdev_nvme_set_options", 00:36:33.831 "params": { 00:36:33.831 "action_on_timeout": "none", 00:36:33.831 "timeout_us": 0, 00:36:33.831 "timeout_admin_us": 0, 00:36:33.831 "keep_alive_timeout_ms": 10000, 00:36:33.831 "arbitration_burst": 0, 00:36:33.831 "low_priority_weight": 0, 00:36:33.831 "medium_priority_weight": 0, 00:36:33.831 "high_priority_weight": 0, 00:36:33.831 "nvme_adminq_poll_period_us": 10000, 00:36:33.831 "nvme_ioq_poll_period_us": 0, 00:36:33.831 "io_queue_requests": 512, 00:36:33.831 "delay_cmd_submit": true, 00:36:33.831 "transport_retry_count": 4, 00:36:33.831 "bdev_retry_count": 3, 00:36:33.831 "transport_ack_timeout": 0, 00:36:33.831 "ctrlr_loss_timeout_sec": 0, 00:36:33.831 "reconnect_delay_sec": 0, 00:36:33.831 "fast_io_fail_timeout_sec": 0, 00:36:33.831 "disable_auto_failback": false, 00:36:33.831 "generate_uuids": false, 00:36:33.831 "transport_tos": 0, 00:36:33.831 "nvme_error_stat": false, 00:36:33.831 "rdma_srq_size": 0, 00:36:33.831 "io_path_stat": false, 00:36:33.831 "allow_accel_sequence": false, 00:36:33.831 "rdma_max_cq_size": 0, 00:36:33.831 "rdma_cm_event_timeout_ms": 0, 00:36:33.831 "dhchap_digests": [ 00:36:33.831 "sha256", 00:36:33.831 "sha384", 00:36:33.831 "sha512" 00:36:33.831 ], 00:36:33.831 "dhchap_dhgroups": [ 00:36:33.831 "null", 00:36:33.831 "ffdhe2048", 00:36:33.831 "ffdhe3072", 00:36:33.831 "ffdhe4096", 00:36:33.831 "ffdhe6144", 00:36:33.831 "ffdhe8192" 00:36:33.831 ] 00:36:33.831 } 00:36:33.831 }, 00:36:33.831 { 00:36:33.831 "method": "bdev_nvme_attach_controller", 00:36:33.831 "params": { 00:36:33.831 "name": "nvme0", 00:36:33.831 "trtype": "TCP", 00:36:33.831 "adrfam": "IPv4", 00:36:33.831 "traddr": "127.0.0.1", 00:36:33.831 "trsvcid": "4420", 00:36:33.831 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:33.831 "prchk_reftag": false, 00:36:33.831 "prchk_guard": false, 00:36:33.831 "ctrlr_loss_timeout_sec": 0, 00:36:33.831 "reconnect_delay_sec": 0, 00:36:33.831 "fast_io_fail_timeout_sec": 0, 00:36:33.831 "psk": "key0", 00:36:33.831 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:33.831 "hdgst": false, 00:36:33.831 "ddgst": false, 00:36:33.831 "multipath": "multipath" 00:36:33.831 } 00:36:33.831 }, 00:36:33.831 { 00:36:33.831 "method": "bdev_nvme_set_hotplug", 00:36:33.831 "params": { 00:36:33.831 "period_us": 100000, 00:36:33.831 "enable": false 00:36:33.831 } 00:36:33.831 }, 00:36:33.831 { 00:36:33.831 "method": "bdev_wait_for_examine" 00:36:33.831 } 00:36:33.831 
] 00:36:33.831 }, 00:36:33.831 { 00:36:33.831 "subsystem": "nbd", 00:36:33.831 "config": [] 00:36:33.831 } 00:36:33.831 ] 00:36:33.831 }' 00:36:33.831 19:37:56 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:33.831 19:37:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:33.831 [2024-11-26 19:37:56.788376] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 00:36:33.831 [2024-11-26 19:37:56.788425] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4059152 ] 00:36:33.831 [2024-11-26 19:37:56.861918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:33.831 [2024-11-26 19:37:56.904073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:34.090 [2024-11-26 19:37:57.066392] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:34.656 19:37:57 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:34.656 19:37:57 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:34.656 19:37:57 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:34.656 19:37:57 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:34.656 19:37:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:34.914 19:37:57 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:34.914 19:37:57 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:34.914 19:37:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:34.914 19:37:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:34.914 19:37:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:34.914 19:37:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:34.914 19:37:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:34.914 19:37:58 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:34.914 19:37:58 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:35.171 19:37:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:35.171 19:37:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:35.172 19:37:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:35.172 19:37:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.172 19:37:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.172 19:37:58 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:35.172 19:37:58 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:35.172 19:37:58 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:35.172 19:37:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:35.430 19:37:58 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:35.430 19:37:58 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:35.430 19:37:58 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.ra85WiYjwX /tmp/tmp.TU7WaWE7ox 00:36:35.430 19:37:58 keyring_file -- keyring/file.sh@20 -- # killprocess 4059152 00:36:35.430 19:37:58 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 4059152 ']' 00:36:35.430 19:37:58 keyring_file -- common/autotest_common.sh@958 -- # kill -0 4059152 00:36:35.430 19:37:58 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:35.430 19:37:58 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:35.430 19:37:58 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4059152 00:36:35.430 19:37:58 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:35.430 19:37:58 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:35.430 19:37:58 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4059152' 00:36:35.430 killing process with pid 4059152 00:36:35.430 19:37:58 keyring_file -- common/autotest_common.sh@973 -- # kill 4059152 00:36:35.430 Received shutdown signal, test time was about 1.000000 seconds 00:36:35.430 00:36:35.430 Latency(us) 00:36:35.430 [2024-11-26T18:37:58.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:35.430 [2024-11-26T18:37:58.544Z] =================================================================================================================== 00:36:35.430 [2024-11-26T18:37:58.544Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:35.430 19:37:58 keyring_file -- common/autotest_common.sh@978 -- # wait 4059152 00:36:35.689 19:37:58 keyring_file -- keyring/file.sh@21 -- # killprocess 4057621 00:36:35.689 19:37:58 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 4057621 ']' 00:36:35.689 19:37:58 keyring_file -- common/autotest_common.sh@958 -- # kill -0 4057621 00:36:35.689 19:37:58 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:35.689 19:37:58 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:35.689 19:37:58 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4057621 00:36:35.689 19:37:58 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:35.689 19:37:58 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:35.689 19:37:58 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4057621' 00:36:35.689 killing process with pid 4057621 00:36:35.689 19:37:58 keyring_file -- common/autotest_common.sh@973 -- # kill 4057621 00:36:35.689 19:37:58 keyring_file -- common/autotest_common.sh@978 -- # wait 4057621 00:36:35.948 00:36:35.948 real 0m11.645s 00:36:35.948 user 0m28.845s 00:36:35.948 sys 0m2.683s 00:36:35.948 19:37:58 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:35.948 19:37:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:35.948 ************************************ 00:36:35.948 END TEST keyring_file 00:36:35.948 ************************************ 00:36:35.948 19:37:59 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:36:35.948 19:37:59 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:35.948 19:37:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:35.948 19:37:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:35.948 19:37:59 -- 
common/autotest_common.sh@10 -- # set +x 00:36:35.948 ************************************ 00:36:35.948 START TEST keyring_linux 00:36:35.948 ************************************ 00:36:35.948 19:37:59 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:36.207 Joined session keyring: 398961394 00:36:36.207 * Looking for test storage... 00:36:36.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:36.207 19:37:59 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:36.207 19:37:59 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:36:36.207 19:37:59 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:36.207 19:37:59 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:36.207 19:37:59 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:36.207 19:37:59 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:36.207 19:37:59 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:36.207 19:37:59 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:36.207 19:37:59 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:36.207 19:37:59 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:36.207 19:37:59 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:36.207 19:37:59 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:36.207 19:37:59 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:36.207 19:37:59 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:36.207 19:37:59 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:36.207 19:37:59 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:36.207 19:37:59 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:36.207 19:37:59 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:36.207 19:37:59 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:36.207 19:37:59 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:36.207 19:37:59 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:36.208 19:37:59 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:36.208 19:37:59 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:36.208 19:37:59 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:36.208 19:37:59 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:36.208 19:37:59 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:36.208 19:37:59 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:36.208 19:37:59 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:36.208 19:37:59 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:36.208 19:37:59 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:36.208 19:37:59 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:36.208 19:37:59 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:36.208 19:37:59 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:36.208 19:37:59 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:36.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.208 --rc genhtml_branch_coverage=1 00:36:36.208 --rc genhtml_function_coverage=1 00:36:36.208 --rc genhtml_legend=1 00:36:36.208 --rc geninfo_all_blocks=1 00:36:36.208 --rc geninfo_unexecuted_blocks=1 00:36:36.208 00:36:36.208 ' 00:36:36.208 19:37:59 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:36.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.208 --rc genhtml_branch_coverage=1 00:36:36.208 --rc genhtml_function_coverage=1 00:36:36.208 --rc genhtml_legend=1 00:36:36.208 --rc geninfo_all_blocks=1 00:36:36.208 --rc geninfo_unexecuted_blocks=1 00:36:36.208 00:36:36.208 ' 00:36:36.208 19:37:59 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:36.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.208 --rc genhtml_branch_coverage=1 00:36:36.208 --rc genhtml_function_coverage=1 00:36:36.208 --rc genhtml_legend=1 00:36:36.208 --rc geninfo_all_blocks=1 00:36:36.208 --rc geninfo_unexecuted_blocks=1 00:36:36.208 00:36:36.208 ' 00:36:36.208 19:37:59 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:36.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.208 --rc genhtml_branch_coverage=1 00:36:36.208 --rc genhtml_function_coverage=1 00:36:36.208 --rc genhtml_legend=1 00:36:36.208 --rc geninfo_all_blocks=1 00:36:36.208 --rc geninfo_unexecuted_blocks=1 00:36:36.208 00:36:36.208 ' 00:36:36.208 19:37:59 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:36.208 19:37:59 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:36.208 19:37:59 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:36.208 19:37:59 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:36.208 19:37:59 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:36.208 19:37:59 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:36.208 19:37:59 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.208 19:37:59 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.208 19:37:59 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.208 19:37:59 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:36.208 19:37:59 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:36.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:36.208 19:37:59 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:36.208 19:37:59 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:36.208 19:37:59 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:36.208 19:37:59 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:36.208 19:37:59 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:36.208 19:37:59 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:36.208 19:37:59 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:36.208 19:37:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:36.208 19:37:59 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:36.208 19:37:59 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:36.208 19:37:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:36.208 19:37:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:36.208 19:37:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:36.208 19:37:59 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:36.208 19:37:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:36.208 19:37:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:36.208 /tmp/:spdk-test:key0 00:36:36.208 19:37:59 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:36.208 19:37:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:36.208 19:37:59 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:36.208 19:37:59 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:36.209 19:37:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:36.209 19:37:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:36.209 
19:37:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:36.209 19:37:59 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:36.209 19:37:59 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:36.209 19:37:59 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:36.209 19:37:59 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:36.209 19:37:59 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:36.209 19:37:59 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:36.467 19:37:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:36.467 19:37:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:36.467 /tmp/:spdk-test:key1 00:36:36.467 19:37:59 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=4059703 00:36:36.468 19:37:59 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:36.468 19:37:59 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 4059703 00:36:36.468 19:37:59 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 4059703 ']' 00:36:36.468 19:37:59 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:36.468 19:37:59 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:36.468 19:37:59 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:36.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:36.468 19:37:59 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:36.468 19:37:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:36.468 [2024-11-26 19:37:59.390704] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
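The prep_key steps traced above (keyring/common.sh building /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 through format_interchange_psk, format_key and an inline `python -`) turn the raw hex strings 00112233445566778899aabbccddeeff and 112233445566778899aabbccddeeff00 into NVMe/TCP PSK interchange strings of the form NVMeTLSkey-1:00:<base64>:. The helper's body is not echoed in the trace, so the block below is only a sketch of what that python step could compute, assuming the key string is taken as ASCII bytes with a little-endian CRC32 appended before base64 encoding; it reproduces the shape of the NVMeTLSkey-1:00:MDAx... payloads that appear further down, not SPDK's actual implementation.

# Sketch only: hypothetical stand-in for the traced format_interchange_psk/format_key step.
prefix=NVMeTLSkey-1
key=00112233445566778899aabbccddeeff    # same hex string the test passes to prep_key
digest=0                                # 0 selects the plain "...:00:..." interchange form
psk=$(python3 - "$prefix" "$key" "$digest" << 'EOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")          # assumed little-endian CRC32 suffix
print(f"{prefix}:{digest:02x}:{base64.b64encode(key + crc).decode()}:", end="")
EOF
)
printf '%s' "$psk" > /tmp/:spdk-test:key0 && chmod 0600 /tmp/:spdk-test:key0   # mirrors keyring/common.sh@21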
00:36:36.468 [2024-11-26 19:37:59.390751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4059703 ] 00:36:36.468 [2024-11-26 19:37:59.465472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:36.468 [2024-11-26 19:37:59.504890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:37.405 19:38:00 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:37.405 19:38:00 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:37.405 19:38:00 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:37.405 19:38:00 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.405 19:38:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:37.405 [2024-11-26 19:38:00.233485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:37.405 null0 00:36:37.405 [2024-11-26 19:38:00.265543] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:37.405 [2024-11-26 19:38:00.265939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:37.405 19:38:00 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.405 19:38:00 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:37.405 119892326 00:36:37.405 19:38:00 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:37.405 556872495 00:36:37.405 19:38:00 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=4059902 00:36:37.405 19:38:00 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 4059902 /var/tmp/bperf.sock 00:36:37.405 19:38:00 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:37.405 19:38:00 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 4059902 ']' 00:36:37.405 19:38:00 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:37.405 19:38:00 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:37.405 19:38:00 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:37.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:37.405 19:38:00 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:37.405 19:38:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:37.405 [2024-11-26 19:38:00.338743] Starting SPDK v25.01-pre git sha1 b09de013a / DPDK 24.03.0 initialization... 
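Immediately above, linux.sh@66/@67 load both interchange strings into the kernel session keyring with `keyctl add user ... @s`, and the shell prints the resulting serial numbers 119892326 and 556872495. Later steps in this run resolve, dump and finally remove those entries with keyctl search, keyctl print and keyctl unlink. A condensed recap of that round trip, using only keyctl invocations that appear verbatim in the trace (the payload shown is the test's own key0 string):

# Kernel session-keyring round trip, as exercised by keyring/linux.sh in this run.
payload='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
sn=$(keyctl add user :spdk-test:key0 "$payload" @s)   # add to the session keyring; prints the serial
keyctl search @s user :spdk-test:key0                 # resolve the serial back from the key name
keyctl print "$sn"                                    # dump the payload for comparison against $payload
keyctl unlink "$sn"                                   # cleanup: detach the key from the session keyring

bdevperf then refers to the key by its keyring name (--psk :spdk-test:key0) rather than by a file path, which is the distinction this suite exercises relative to the keyring_file run above.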
00:36:37.405 [2024-11-26 19:38:00.338787] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4059902 ] 00:36:37.405 [2024-11-26 19:38:00.413218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:37.405 [2024-11-26 19:38:00.455118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:37.405 19:38:00 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:37.405 19:38:00 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:37.405 19:38:00 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:37.405 19:38:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:37.662 19:38:00 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:37.662 19:38:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:37.918 19:38:00 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:37.918 19:38:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:38.175 [2024-11-26 19:38:01.091966] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:38.175 nvme0n1 00:36:38.175 19:38:01 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:38.175 19:38:01 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:38.175 19:38:01 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:38.175 19:38:01 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:38.175 19:38:01 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:38.175 19:38:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:38.431 19:38:01 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:38.431 19:38:01 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:38.431 19:38:01 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:38.431 19:38:01 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:38.431 19:38:01 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:38.431 19:38:01 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:38.431 19:38:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:38.689 19:38:01 keyring_linux -- keyring/linux.sh@25 -- # sn=119892326 00:36:38.689 19:38:01 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:38.689 19:38:01 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:38.689 19:38:01 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 119892326 == \1\1\9\8\9\2\3\2\6 ]] 00:36:38.689 19:38:01 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 119892326 00:36:38.689 19:38:01 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:38.689 19:38:01 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:38.689 Running I/O for 1 seconds... 00:36:39.622 21621.00 IOPS, 84.46 MiB/s 00:36:39.622 Latency(us) 00:36:39.622 [2024-11-26T18:38:02.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.622 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:39.622 nvme0n1 : 1.01 21621.30 84.46 0.00 0.00 5900.56 4993.22 14480.34 00:36:39.622 [2024-11-26T18:38:02.736Z] =================================================================================================================== 00:36:39.622 [2024-11-26T18:38:02.736Z] Total : 21621.30 84.46 0.00 0.00 5900.56 4993.22 14480.34 00:36:39.622 { 00:36:39.622 "results": [ 00:36:39.622 { 00:36:39.622 "job": "nvme0n1", 00:36:39.622 "core_mask": "0x2", 00:36:39.622 "workload": "randread", 00:36:39.622 "status": "finished", 00:36:39.622 "queue_depth": 128, 00:36:39.622 "io_size": 4096, 00:36:39.622 "runtime": 1.005906, 00:36:39.622 "iops": 21621.304575178994, 00:36:39.622 "mibps": 84.45822099679295, 00:36:39.622 "io_failed": 0, 00:36:39.622 "io_timeout": 0, 00:36:39.622 "avg_latency_us": 5900.56238408334, 00:36:39.622 "min_latency_us": 4993.219047619048, 00:36:39.622 "max_latency_us": 14480.335238095238 00:36:39.622 } 00:36:39.622 ], 00:36:39.622 "core_count": 1 00:36:39.622 } 00:36:39.622 19:38:02 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:39.622 19:38:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:39.880 19:38:02 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:39.880 19:38:02 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:39.880 19:38:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:39.880 19:38:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:39.880 19:38:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:39.880 19:38:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:40.137 19:38:03 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:40.137 19:38:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:40.137 19:38:03 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:40.137 19:38:03 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:40.137 19:38:03 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:36:40.137 19:38:03 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:36:40.137 19:38:03 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:40.137 19:38:03 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:40.137 19:38:03 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:40.137 19:38:03 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:40.137 19:38:03 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:40.137 19:38:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:40.393 [2024-11-26 19:38:03.264181] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:40.394 [2024-11-26 19:38:03.264207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1076fa0 (107): Transport endpoint is not connected 00:36:40.394 [2024-11-26 19:38:03.265202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1076fa0 (9): Bad file descriptor 00:36:40.394 [2024-11-26 19:38:03.266203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:40.394 [2024-11-26 19:38:03.266213] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:40.394 [2024-11-26 19:38:03.266220] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:40.394 [2024-11-26 19:38:03.266228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:36:40.394 request: 00:36:40.394 { 00:36:40.394 "name": "nvme0", 00:36:40.394 "trtype": "tcp", 00:36:40.394 "traddr": "127.0.0.1", 00:36:40.394 "adrfam": "ipv4", 00:36:40.394 "trsvcid": "4420", 00:36:40.394 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:40.394 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:40.394 "prchk_reftag": false, 00:36:40.394 "prchk_guard": false, 00:36:40.394 "hdgst": false, 00:36:40.394 "ddgst": false, 00:36:40.394 "psk": ":spdk-test:key1", 00:36:40.394 "allow_unrecognized_csi": false, 00:36:40.394 "method": "bdev_nvme_attach_controller", 00:36:40.394 "req_id": 1 00:36:40.394 } 00:36:40.394 Got JSON-RPC error response 00:36:40.394 response: 00:36:40.394 { 00:36:40.394 "code": -5, 00:36:40.394 "message": "Input/output error" 00:36:40.394 } 00:36:40.394 19:38:03 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:36:40.394 19:38:03 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:40.394 19:38:03 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:40.394 19:38:03 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:40.394 19:38:03 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:40.394 19:38:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:40.394 19:38:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:40.394 19:38:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:40.394 19:38:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:40.394 19:38:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:40.394 19:38:03 keyring_linux -- keyring/linux.sh@33 -- # sn=119892326 00:36:40.394 19:38:03 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 119892326 00:36:40.394 1 links removed 00:36:40.394 19:38:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:40.394 19:38:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:40.394 19:38:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:40.394 19:38:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:40.394 19:38:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:40.394 19:38:03 keyring_linux -- keyring/linux.sh@33 -- # sn=556872495 00:36:40.394 19:38:03 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 556872495 00:36:40.394 1 links removed 00:36:40.394 19:38:03 keyring_linux -- keyring/linux.sh@41 -- # killprocess 4059902 00:36:40.394 19:38:03 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 4059902 ']' 00:36:40.394 19:38:03 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 4059902 00:36:40.394 19:38:03 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:40.394 19:38:03 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:40.394 19:38:03 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4059902 00:36:40.394 19:38:03 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:40.394 19:38:03 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:40.394 19:38:03 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4059902' 00:36:40.394 killing process with pid 4059902 00:36:40.394 19:38:03 keyring_linux -- common/autotest_common.sh@973 -- # kill 4059902 00:36:40.394 Received shutdown signal, test time was about 1.000000 seconds 00:36:40.394 00:36:40.394 
Latency(us) 00:36:40.394 [2024-11-26T18:38:03.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:40.394 [2024-11-26T18:38:03.508Z] =================================================================================================================== 00:36:40.394 [2024-11-26T18:38:03.508Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:40.394 19:38:03 keyring_linux -- common/autotest_common.sh@978 -- # wait 4059902 00:36:40.394 19:38:03 keyring_linux -- keyring/linux.sh@42 -- # killprocess 4059703 00:36:40.394 19:38:03 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 4059703 ']' 00:36:40.394 19:38:03 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 4059703 00:36:40.394 19:38:03 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:40.652 19:38:03 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:40.652 19:38:03 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4059703 00:36:40.652 19:38:03 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:40.652 19:38:03 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:40.652 19:38:03 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4059703' 00:36:40.652 killing process with pid 4059703 00:36:40.652 19:38:03 keyring_linux -- common/autotest_common.sh@973 -- # kill 4059703 00:36:40.652 19:38:03 keyring_linux -- common/autotest_common.sh@978 -- # wait 4059703 00:36:40.909 00:36:40.909 real 0m4.806s 00:36:40.909 user 0m8.714s 00:36:40.909 sys 0m1.485s 00:36:40.909 19:38:03 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:40.909 19:38:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:40.909 ************************************ 00:36:40.909 END TEST keyring_linux 00:36:40.909 ************************************ 00:36:40.909 19:38:03 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:40.909 19:38:03 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:40.909 19:38:03 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:40.909 19:38:03 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:40.909 19:38:03 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:40.909 19:38:03 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:40.909 19:38:03 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:40.909 19:38:03 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:40.909 19:38:03 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:40.909 19:38:03 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:40.909 19:38:03 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:40.909 19:38:03 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:40.909 19:38:03 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:40.909 19:38:03 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:40.909 19:38:03 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:40.909 19:38:03 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:40.909 19:38:03 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:40.909 19:38:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:40.909 19:38:03 -- common/autotest_common.sh@10 -- # set +x 00:36:40.909 19:38:03 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:40.909 19:38:03 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:40.909 19:38:03 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:40.909 19:38:03 -- common/autotest_common.sh@10 -- # set +x 00:36:46.179 INFO: APP EXITING 
00:36:46.179 INFO: killing all VMs 00:36:46.179 INFO: killing vhost app 00:36:46.179 INFO: EXIT DONE 00:36:48.713 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:36:48.713 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:48.713 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:48.713 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:48.713 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:48.713 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:48.713 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:48.713 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:48.714 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:48.714 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:48.714 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:48.714 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:48.714 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:48.714 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:48.714 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:48.714 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:48.714 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:52.002 Cleaning 00:36:52.002 Removing: /var/run/dpdk/spdk0/config 00:36:52.002 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:52.002 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:52.002 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:52.002 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:52.002 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:52.002 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:52.002 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:52.002 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:52.002 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:52.002 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:52.002 Removing: /var/run/dpdk/spdk1/config 00:36:52.002 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:52.002 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:52.002 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:52.002 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:52.002 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:52.002 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:52.002 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:52.002 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:52.002 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:52.002 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:52.002 Removing: /var/run/dpdk/spdk2/config 00:36:52.002 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:52.002 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:52.002 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:52.002 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:52.002 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:52.002 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:52.002 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:52.002 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:52.002 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:52.002 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:52.002 Removing: /var/run/dpdk/spdk3/config 00:36:52.002 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:52.002 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:52.002 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:52.002 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:52.002 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:52.002 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:52.002 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:52.002 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:52.002 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:52.002 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:52.002 Removing: /var/run/dpdk/spdk4/config 00:36:52.002 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:52.003 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:52.003 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:52.003 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:52.003 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:52.003 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:52.003 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:52.003 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:52.003 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:52.003 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:52.003 Removing: /dev/shm/bdev_svc_trace.1 00:36:52.003 Removing: /dev/shm/nvmf_trace.0 00:36:52.003 Removing: /dev/shm/spdk_tgt_trace.pid3546723 00:36:52.003 Removing: /var/run/dpdk/spdk0 00:36:52.003 Removing: /var/run/dpdk/spdk1 00:36:52.003 Removing: /var/run/dpdk/spdk2 00:36:52.003 Removing: /var/run/dpdk/spdk3 00:36:52.003 Removing: /var/run/dpdk/spdk4 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3543844 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3545039 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3546723 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3547372 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3548318 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3548421 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3549431 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3549537 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3549879 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3551414 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3552911 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3553205 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3553502 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3553808 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3554100 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3554353 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3554600 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3554887 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3555629 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3558635 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3558893 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3559148 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3559159 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3559645 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3559702 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3560152 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3560231 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3560638 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3560659 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3560917 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3560935 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3561490 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3561736 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3562037 00:36:52.003 Removing: 
/var/run/dpdk/spdk_pid3565965 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3570232 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3580437 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3580956 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3585459 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3585730 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3590485 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3596399 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3599007 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3609437 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3618362 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3620109 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3621041 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3638510 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3642459 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3687715 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3693481 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3699571 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3706106 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3706108 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3707020 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3707745 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3708635 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3709232 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3709321 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3709553 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3709566 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3709632 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3710483 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3711395 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3712318 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3712787 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3712814 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3713162 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3714254 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3715242 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3723333 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3753313 00:36:52.003 Removing: /var/run/dpdk/spdk_pid3757810 00:36:52.262 Removing: /var/run/dpdk/spdk_pid3759440 00:36:52.262 Removing: /var/run/dpdk/spdk_pid3761245 00:36:52.262 Removing: /var/run/dpdk/spdk_pid3761480 00:36:52.262 Removing: /var/run/dpdk/spdk_pid3761538 00:36:52.262 Removing: /var/run/dpdk/spdk_pid3761734 00:36:52.262 Removing: /var/run/dpdk/spdk_pid3762238 00:36:52.262 Removing: /var/run/dpdk/spdk_pid3764189 00:36:52.262 Removing: /var/run/dpdk/spdk_pid3765432 00:36:52.262 Removing: /var/run/dpdk/spdk_pid3765855 00:36:52.262 Removing: /var/run/dpdk/spdk_pid3768111 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3768494 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3769163 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3773436 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3779782 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3779783 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3779784 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3783605 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3792199 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3796079 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3802221 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3803313 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3804852 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3806176 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3810991 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3815719 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3819739 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3827372 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3827378 00:36:52.263 Removing: 
/var/run/dpdk/spdk_pid3831883 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3832102 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3832331 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3832788 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3832796 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3837419 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3837898 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3842404 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3844943 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3850340 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3855899 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3865092 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3872193 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3872213 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3891044 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3891680 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3892162 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3892731 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3893369 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3894015 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3894532 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3895009 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3899254 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3899491 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3905583 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3905809 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3911619 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3915853 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3925575 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3926139 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3930415 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3930739 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3934795 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3940542 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3943068 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3954574 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3966404 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3969724 00:36:52.263 Removing: /var/run/dpdk/spdk_pid3971802 00:36:52.522 Removing: /var/run/dpdk/spdk_pid3998700 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4003254 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4008938 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4022996 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4023023 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4029309 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4031999 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4035854 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4037368 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4039603 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4040701 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4049447 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4049910 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4050371 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4052859 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4053327 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4053791 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4057621 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4057628 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4059152 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4059703 00:36:52.522 Removing: /var/run/dpdk/spdk_pid4059902 00:36:52.522 Clean 00:36:52.522 19:38:15 -- common/autotest_common.sh@1453 -- # return 0 00:36:52.522 19:38:15 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:52.522 19:38:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:52.522 19:38:15 -- common/autotest_common.sh@10 -- # set +x 00:36:52.522 19:38:15 -- 
spdk/autotest.sh@391 -- # timing_exit autotest 00:36:52.522 19:38:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:52.522 19:38:15 -- common/autotest_common.sh@10 -- # set +x 00:36:52.522 19:38:15 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:52.522 19:38:15 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:52.522 19:38:15 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:52.522 19:38:15 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:52.522 19:38:15 -- spdk/autotest.sh@398 -- # hostname 00:36:52.523 19:38:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:52.782 geninfo: WARNING: invalid characters removed from testname! 00:37:14.717 19:38:36 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:16.096 19:38:38 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:18.005 19:38:40 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:19.383 19:38:42 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:21.315 19:38:44 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:23.218 19:38:46 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:25.122 19:38:47 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:25.122 19:38:47 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:25.122 19:38:47 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:37:25.122 19:38:47 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:25.122 19:38:47 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:25.122 19:38:47 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:25.122 + [[ -n 3467073 ]] 00:37:25.122 + sudo kill 3467073 00:37:25.132 [Pipeline] } 00:37:25.148 [Pipeline] // stage 00:37:25.153 [Pipeline] } 00:37:25.168 [Pipeline] // timeout 00:37:25.175 [Pipeline] } 00:37:25.192 [Pipeline] // catchError 00:37:25.199 [Pipeline] } 00:37:25.220 [Pipeline] // wrap 00:37:25.229 [Pipeline] } 00:37:25.245 [Pipeline] // catchError 00:37:25.256 [Pipeline] stage 00:37:25.258 [Pipeline] { (Epilogue) 00:37:25.272 [Pipeline] catchError 00:37:25.274 [Pipeline] { 00:37:25.287 [Pipeline] echo 00:37:25.288 Cleanup processes 00:37:25.294 [Pipeline] sh 00:37:25.577 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:25.577 4070390 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:25.591 [Pipeline] sh 00:37:25.876 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:25.876 ++ grep -v 'sudo pgrep' 00:37:25.876 ++ awk '{print $1}' 00:37:25.876 + sudo kill -9 00:37:25.876 + true 00:37:25.889 [Pipeline] sh 00:37:26.173 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:38.388 [Pipeline] sh 00:37:38.674 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:38.674 Artifacts sizes are good 00:37:38.688 [Pipeline] archiveArtifacts 00:37:38.696 Archiving artifacts 00:37:38.842 [Pipeline] sh 00:37:39.129 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:39.148 [Pipeline] cleanWs 00:37:39.161 [WS-CLEANUP] Deleting project workspace... 00:37:39.161 [WS-CLEANUP] Deferred wipeout is used... 00:37:39.168 [WS-CLEANUP] done 00:37:39.170 [Pipeline] } 00:37:39.189 [Pipeline] // catchError 00:37:39.204 [Pipeline] sh 00:37:39.492 + logger -p user.info -t JENKINS-CI 00:37:39.502 [Pipeline] } 00:37:39.519 [Pipeline] // stage 00:37:39.526 [Pipeline] } 00:37:39.544 [Pipeline] // node 00:37:39.549 [Pipeline] End of Pipeline 00:37:39.589 Finished: SUCCESS
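(For reference, the coverage post-processing traced above amounts to: capture the test run's counters, merge them with a baseline, then strip paths that should not count toward SPDK coverage. The following is a minimal sketch with illustrative file names; the job itself drives this through autotest.sh with the lcov options shown in the log, and the genhtml step is an optional extra, not part of this pipeline.)
lcov -q -c --no-external -d ./spdk -t "$(hostname)" -o cov_test.info   # capture counters from the test run
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info            # merge baseline + test tracefiles
lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info                 # exclude vendored DPDK sources
lcov -q -r cov_total.info '/usr/*' -o cov_total.info                   # exclude system headers
genhtml -q cov_total.info -o coverage_html                             # optional: render a browsable report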